Dataset schema: aid (string, length 9-15), mid (string, length 7-10), abstract (string, length 78-2.56k), related_work (string, length 92-1.77k), ref_abstract (dict).
0901.2094
2107734287
This paper demonstrates fundamental limits of sensor networks for detection problems where the number of hypotheses is exponentially large. Such problems characterize many important applications including detection and classification of targets in a geographical area using a network of seismic sensors, and detecting complex substances with a chemical sensor array. We refer to such applications as large-scale detection problems. Using the insight that these problems share fundamental similarities with the problem of communicating over a noisy channel, we define the “sensing capacity” and lower bound it for a number of sensor network models. The sensing capacity expression differs significantly from the channel capacity due to the fact that for a fixed sensor configuration, codewords are dependent and nonidentically distributed. The sensing capacity provides a bound on the minimal number of sensors required to detect the state of an environment to within a desired accuracy. The results differ significantly from classical detection theory, and provide an intriguing connection between sensor networks and communications. In addition, we discuss the insight that sensing capacity provides for the problem of sensor selection.
We review the main theoretical results presented in this paper. In Section we introduce a simple but useful sensor network model that captures applications such as chemical sensing and computer network monitoring. For this model, we define and bound the sensing capacity. The sensing capacity bound differs significantly from standard channel capacity results and requires novel arguments to account for the constrained encoding of a sensor network. This observation is important because mutual information is commonly used as a sensor selection heuristic @cite_13 . Our result shows that mutual information is not the correct metric for large-scale detection applications. Extensions are presented to account for non-binary target vectors, target sparsity, and heterogeneous sensors. Plotting the sensing capacity bound, we demonstrate interesting sensing tradeoffs. For example, perhaps counter-intuitively, sensors of shorter range can achieve a desired detection accuracy with fewer measurements than sensors of longer range. Finally, we compare our sensing capacity bound to simulated sensor network performance.
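As a rough illustration of the sensor-selection heuristic discussed above (not the sensing-capacity bound itself), the following sketch computes the mutual information between a binary target and a single noisy sensor reading; the prior and flip probabilities are made-up values. Ranking sensors by this quantity is the kind of heuristic @cite_13 supports and that our result argues is not the right metric for large-scale detection.

```python
import numpy as np

def binary_entropy(p):
    """Entropy in bits of a Bernoulli(p) random variable."""
    if p in (0.0, 1.0):
        return 0.0
    return float(-p * np.log2(p) - (1 - p) * np.log2(1 - p))

def sensor_mutual_information(prior, flip_prob):
    """I(X;Y) in bits for a binary target X ~ Bernoulli(prior) observed
    through a sensor that flips the true value with probability flip_prob."""
    p_y1 = prior * (1 - flip_prob) + (1 - prior) * flip_prob
    return binary_entropy(p_y1) - binary_entropy(flip_prob)

# Rank two hypothetical sensors by the mutual-information heuristic.
for name, flip in [("low-noise sensor", 0.05), ("high-noise sensor", 0.25)]:
    print(name, "->", round(sensor_mutual_information(0.5, flip), 3), "bits")
```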
{ "cite_N": [ "@cite_13" ], "mid": [ "1537690864" ], "abstract": [ "From the Publisher: This timely book presents such a consistent framework for addressing data fusion and sensor management. While the framework and the methods presented are applicable to a wide variety of multi-sensor systems, the book focuses on decentralized systems. The book also describes an actual to robot navigation and presents real data and results. The vehicle makes use of sonar sensors with focus of attention capability." ] }
In Section we introduce a sensor network model that accounts for contiguity in a sensor's field of view. Contiguity is an essential aspect of many classes of sensors: cameras observe localized regions, and seismic sensors sense vibrations from nearby targets. We derive sensing capacity bounds that account for such sensors by extending results on Markov types @cite_20 , and we use convex optimization to compute these bounds. The first result in Section assumes the state of the environment is modeled as a one-dimensional vector. In Section we extend this result to the case where the state of the environment is modeled as a two-dimensional grid. While a one-dimensional vector can model sensor network applications such as border security and traffic monitoring, the two-dimensional results significantly broaden the range of applications described by our models.
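A minimal sketch of the combinatorial object behind the Markov-type arguments mentioned above: the empirical first-order transition frequencies of a sequence and the conditional entropy of the induced chain, which is the exponent governing how many sequences share a given Markov type. This is illustrative only; it is not the bound computation used in the paper, and the example sequence is arbitrary.

```python
import numpy as np
from collections import Counter

def markov_type(seq):
    """First-order Markov type: empirical transition frequencies of a sequence."""
    counts = Counter(zip(seq, seq[1:]))
    total = len(seq) - 1
    return {pair: c / total for pair, c in counts.items()}

def empirical_conditional_entropy(seq):
    """H(X_t | X_{t-1}) in bits under the empirical chain; the number of
    sequences sharing a Markov type grows roughly as 2**(n * H)."""
    joint = markov_type(seq)
    marginal = Counter(seq[:-1])
    n = len(seq) - 1
    h = 0.0
    for (a, b), p_ab in joint.items():
        h -= p_ab * np.log2(p_ab / (marginal[a] / n))
    return float(h)

seq = [0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0]   # arbitrary example sequence
print(markov_type(seq))
print(round(empirical_conditional_entropy(seq), 3), "bits/symbol")
```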
{ "cite_N": [ "@cite_20" ], "mid": [ "2154518298" ], "abstract": [ "The method of types is one of the key technical tools in Shannon theory, and this tool is valuable also in other fields. In this paper, some key applications are presented in sufficient detail enabling an interested nonspecialist to gain a working knowledge of the method, and a wide selection of further applications are surveyed. These range from hypothesis testing and large deviations theory through error exponents for discrete memoryless channels and capacity of arbitrarily varying channels to multiuser problems. While the method of types is suitable primarily for discrete memoryless models, its extensions to certain models with memory are also discussed." ] }
The performance of sensor networks is limited by both sensing resources and non-sensing resources such as communications, computation, and power. One set of results has been obtained by considering the limitations that communications requirements impose on a sensor network. @cite_1 extends the results in @cite_6 to account for the different traffic models that arise in a sensor network. @cite_34 studies network transport capacity for the case of regular sensor networks. @cite_32 studies the impact of computational constraints and power on the communication efficiency of sensor networks. @cite_36 has considered the interaction between transmission rates and power constraints. Another set of results has been obtained by extending results from compression to sensor networks. Distributed source coding @cite_11 , @cite_19 provides limits on the compression of separately encoded correlated sources. @cite_10 applies these results to sensor networks. @cite_40 provides an overview of this area of research.
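For concreteness, the Slepian-Wolf rate region referred to above (@cite_11) can be checked numerically for a toy pair of correlated binary sensor readings. The joint pmf below is an assumed example, not data from any of the cited works.

```python
import numpy as np

def entropies(p_xy):
    """Return (H(X|Y), H(Y|X), H(X,Y)) in bits from a joint pmf over (X, Y)."""
    p_xy = np.asarray(p_xy, dtype=float)
    p_x = p_xy.sum(axis=1)
    p_y = p_xy.sum(axis=0)
    def H(p):
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())
    h_xy = H(p_xy.ravel())
    return h_xy - H(p_y), h_xy - H(p_x), h_xy

def slepian_wolf_admissible(rx, ry, p_xy):
    """True if the rate pair (rx, ry) lies in the Slepian-Wolf region for p_xy."""
    h_x_given_y, h_y_given_x, h_xy = entropies(p_xy)
    return rx >= h_x_given_y and ry >= h_y_given_x and rx + ry >= h_xy

# Two correlated binary sensor readings that agree with probability 0.9.
p_xy = [[0.45, 0.05],
        [0.05, 0.45]]
print(entropies(p_xy))
print(slepian_wolf_admissible(0.6, 0.6, p_xy))   # fails the sum-rate constraint
```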
{ "cite_N": [ "@cite_36", "@cite_1", "@cite_32", "@cite_6", "@cite_19", "@cite_40", "@cite_34", "@cite_10", "@cite_11" ], "mid": [ "2188872234", "2076399305", "2109004672", "2137775453", "2150412388", "2162404506", "2120855076", "2139519968", "2099213070" ], "abstract": [ "", "In this paper we study the transport capacity of a data-gathering wireless sensor network under different communication organizations. In particular, we consider using a flat as well as a hierarchical clustering architecture to realize many-to-one communications. The capacity of the network under this many-to-one data-gathering scenario is reduced compared to random one-to-one communication due to the unavoidable creation of a point of traffic concentration at the data collector receiver. We introduce the overall throughput bound of λ = W n per node, where W is the transmission capacity, and show under what conditions it can be achieved and under what conditions it cannot. When those conditions are not met, we constructively show how λ = Θ(W n) is achieved with high probability as the number of sensors goes to infinity. We also show how the introduction of clustering can improve the throughput. We discuss the trade-offs between achieving capacity and energy consumption, how transport capacity might be affected by considering in-network processing and the implications this study has on the design of practical protocols for large-scale data-gathering wireless sensor networks.", "Motivated by limited computational resources in sensor nodes, the impact of complexity constraints on the communication efficiency of sensor networks is studied. A single-parameter characterization of processing limitation of nodes in sensor networks is invoked. Specifically, the relaying nodes are assumed to \"donate\" only a small part of their total processor time to relay other nodes information. The amount of donated processor time is modelled by the node's ability to decode a channel code reliably at given rate R. Focusing on a four node network, with two relays, prior work for a complexity constrained single relay network is built upon. In the proposed coding scheme, the transmitter sends a broadcast code such that the relays decode only the \"coarse\" information, and assist the receiver in removing ambiguity only in that information. Via numerical examples, the impact of different power constraints in the system, ranging from per node power bound to network wide power constraint is explored. As the complexity bound R increases, the proposed scheme becomes identical to the recently proposed achievable rate by Gupta & Kumar (2003). Both discrete memoryless and Gaussian channels are considered.", "When n identical randomly located nodes, each capable of transmitting at W bits per second and using a fixed range, form a wireless network, the throughput spl lambda (n) obtainable by each node for a randomly chosen destination is spl Theta (W spl radic (nlogn)) bits per second under a noninterference protocol. If the nodes are optimally placed in a disk of unit area, traffic patterns are optimally assigned, and each transmission's range is optimally chosen, the bit-distance product that can be transported by the network per second is spl Theta (W spl radic An) bit-meters per second. Thus even under optimal circumstances, the throughput is only spl Theta (W spl radic n) bits per second for each node for a destination nonvanishingly far away. 
Similar results also hold under an alternate physical model where a required signal-to-interference ratio is specified for successful receptions. Fundamentally, it is the need for every node all over the domain to share whatever portion of the channel it is utilizing with nodes in its local neighborhood that is the reason for the constriction in capacity. Splitting the channel into several subchannels does not change any of the results. Some implications may be worth considering by designers. Since the throughput furnished to each user diminishes to zero as the number of users is increased, perhaps networks connecting smaller numbers of users, or featuring connections mostly with nearby neighbors, may be more likely to be find acceptance.", "Let (X_ k , Y_ k ) ^ _ k=1 be a sequence of independent drawings of a pair of dependent random variables X, Y . Let us say that X takes values in the finite set X . It is desired to encode the sequence X_ k in blocks of length n into a binary stream of rate R , which can in turn be decoded as a sequence X _ k , where X _ k X , the reproduction alphabet. The average distortion level is (1 n) ^ n _ k=1 E[D(X_ k , X _ k )] , where D(x, x ) 0, x X , x X , is a preassigned distortion measure. The special assumption made here is that the decoder has access to the side information Y_ k . In this paper we determine the quantity R (d) , defined as the infimum ofrates R such that (with > 0 arbitrarily small and with suitably large n )communication is possible in the above setting at an average distortion level (as defined above) not exceeding d + . The main result is that R (d) = [I(X;Z) - I(Y;Z)] , where the infimum is with respect to all auxiliary random variables Z (which take values in a finite set Z ) that satisfy: i) Y,Z conditionally independent given X ; ii) there exists a function f: Y Z X , such that E[D(X,f(Y,Z))] d . Let R_ X | Y (d) be the rate-distortion function which results when the encoder as well as the decoder has access to the side information Y_ k . In nearly all cases it is shown that when d > 0 then R (d) > R_ X|Y (d) , so that knowledge of the side information at the encoder permits transmission of the X_ k at a given distortion level using a smaller transmission rate. This is in contrast to the situation treated by Slepian and Wolf [5] where, for arbitrarily accurate reproduction of X_ k , i.e., d = for any >0 , knowledge of the side information at the encoder does not allow a reduction of the transmission rate.", "In recent years, sensor research has been undergoing a quiet revolution, promising to have a significant impact throughout society that could quite possibly dwarf previous milestones in the information revolution. Realizing the great promise of sensor networks requires more than a mere advance in individual technologies. It relies on many components working together in an efficient, unattended, comprehensible, and trustworthy manner. One of the enabling technologies in sensor networks is the distributed source coding (DSC), which refers to the compression of the multiple correlated sensor outputs that does not communicate with each other. DSC allows a many-to-one video coding paradigm that effectively swaps encoder-decoder complexity with respect to conventional video coding, thereby representing a fundamental concept shift in video processing. This article has presented an intensive discussion on two DSC techniques, namely Slepian-Wolf coding and Wyner-Ziv coding. 
The Slepian and Wolf coding have theoretically shown that separate encoding is as efficient as joint coding for lossless compression in channel coding.", "We study network capacity limits and optimal routing algorithms for regular sensor networks, namely, square and torus grid sensor networks, in both, the static case (no node failures) and the dynamic case (node failures). For static networks, we derive upper bounds on the network capacity and then we characterize and provide optimal routing algorithms whose rate per node is equal to this upper bound, thus, obtaining the exact analytical expression for the network capacity. For dynamic networks, the unreliability of the network is modeled in two ways: a Markovian node failure and an energy based node failure. Depending on the probability of node failure that is present in the network, we propose to use a particular combination of two routing algorithms, the first one being optimal when there are no node failures at all and the second one being appropriate when the probability of node failure is high. The combination of these two routing algorithms defines a family of randomized routing algorithms, each of them being suitable for a given probability of node failure.", "Distributed nature of the sensor network architecture introduces unique challenges and opportunities for collaborative networked signal processing techniques that can potentially lead to significant performance gains. Many evolving low-power sensor network scenarios need to have high spatial density to enable reliable operation in the face of component node failures as well as to facilitate high spatial localization of events of interest. This induces a high level of network data redundancy, where spatially proximal sensor readings are highly correlated. We propose a new way of removing this redundancy in a completely distributed manner, i.e., without the sensors needing to talk, to one another. Our constructive framework for this problem is dubbed DISCUS (distributed source coding using syndromes) and is inspired by fundamental concepts from information theory. We review the main ideas, provide illustrations, and give the intuition behind the theory that enables this framework.We present a new domain of collaborative information communication and processing through the framework on distributed source coding. This framework enables highly effective and efficient compression across a sensor network without the need to establish inter-node communication, using well-studied and fast error-correcting coding algorithms.", "Correlated information sequences ,X_ -1 ,X_0,X_1, and ,Y_ -1 ,Y_0,Y_1, are generated by repeated independent drawings of a pair of discrete random variables X, Y from a given bivariate distribution P_ XY (x,y) . We determine the minimum number of bits per character R_X and R_Y needed to encode these sequences so that they can be faithfully reproduced under a variety of assumptions regarding the encoders and decoders. The results, some of which are not at all obvious, are presented as an admissible rate region R in the R_X - R_Y plane. They generalize a similar and well-known result for a single information sequence, namely R_X H (X) for faithful reproduction." ] }
The problem of estimating a continuous field using a sensor network is an active area of research. @cite_2 considers the relationship between transport capacity and the rate distortion function of continuous random processes. @cite_15 proves limits on the estimation of inhomogeneous random fields using sensors that collect noisy point samples. Other work on the problem of estimating a continuous random field includes @cite_12 , @cite_5 , @cite_21 , @cite_26 . @cite_33 considers the estimation of continuous parameters of a set of underlying random processes through a noisy communications channel. The results presented in this paper consider the detection of a discrete state of an environment; we do not consider extensions to environments with a continuous state.
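The following sketch illustrates the generic continuous-field estimation problem these works address: a smooth 1-D field is recovered from noisy point samples with a simple kernel smoother, and the error typically shrinks as the number of sensors grows. The field, noise level, and bandwidth are arbitrary illustrative choices; none of the cited schemes is implemented here.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_field(n_sensors, noise_std):
    """Noisy point samples of a smooth 1-D field f(t) = sin(2*pi*t) on [0, 1]."""
    t = np.sort(rng.uniform(0.0, 1.0, n_sensors))
    return t, np.sin(2 * np.pi * t) + noise_std * rng.normal(size=n_sensors)

def kernel_estimate(t_query, t_obs, y_obs, bandwidth=0.05):
    """Nadaraya-Watson estimate of the field at the query points."""
    w = np.exp(-0.5 * ((t_query[:, None] - t_obs[None, :]) / bandwidth) ** 2)
    return (w @ y_obs) / w.sum(axis=1)

t_query = np.linspace(0.0, 1.0, 200)
truth = np.sin(2 * np.pi * t_query)
for n in (20, 100, 500):
    t_obs, y_obs = sample_field(n, noise_std=0.3)
    mse = np.mean((kernel_estimate(t_query, t_obs, y_obs) - truth) ** 2)
    print(n, "sensors -> MSE", round(float(mse), 4))
```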
{ "cite_N": [ "@cite_26", "@cite_33", "@cite_21", "@cite_2", "@cite_5", "@cite_15", "@cite_12" ], "mid": [ "2133489575", "2150497041", "2167484620", "2111224874", "1597674521", "2119102657", "1542464532" ], "abstract": [ "Distributed sampling and reconstruction of a physical field using an array of sensors is a problem of considerable interest in environmental monitoring applications of sensor networks. Our recent work has focused on the sampling of bandlimited sensor fields. However, sensor fields are not perfectly bandlimited but typically have rapidly decaying spectra. In a classical sampling set-up it is possible to precede the A D sampling operation with an appropriate analog anti-aliasing filter. However, in the case of sensor networks, this is infeasible since sampling must precede filtering. We show that even though the effects of aliasing on the reconstruction cannot be prevented due to the \"filter-less\" sampling constraint, they can be suitably controlled by oversampling and carefully reconstructing the field from the samples. We show using a dither-based scheme that it is possible to estimate non-bandlimited fields with a precision that depends on how fast the spectral content of the field decays. We develop a framework for analyzing non-bandlimited fields that lead to upper bounds on the maximum pointwise error for a spatial bit rate of R bits meter. We present results for fields with exponentially decaying spectra as an illustration. In particular, we show that for fields f(t) with exponential tails; i.e., F( spl omega ) < spl pi spl alpha sup - spl alpha | spl omega | , the maximum pointwise error decays as c2e sup -a sub 1 spl radic R+ c3 1 spl radic R (e sup -2a sub 1 spl radic R) with spatial bit rate R bits meter. Finally, we show that for fields with spectra that have a finite second moment, the distortion decreases as O((1 N) sup 2 3 ) as the density of sensors, N, scales up to infinity . We show that if D is the targeted non-zero distortion, then the required (finite) rate R scales as O (1 spl radic D log 1 D).", "For a class of sensor networks, the task is to monitor an underlying physical phenomenon over space and time through an imperfect observation process. The sensors can communicate back to a central data collector over a noisy channel. The key parameters in such a setting are the fidelity (or distortion) at which the underlying physical phenomenon can be estimated by the data collector, and the cost of operating the sensor network. This is a network joint source-channel communication problem, involving both compression and communication. It is well known that these two tasks may not be addressed separately without sacrificing optimality, and the optimal performance is generally unknown. This paper presents a lower bound on the best achievable end-to-end distortion as a function of the number of sensors, their total transmit power, the number of degrees of freedom of the underlying source process, and the spatio-temporal communication bandwidth. Particular coding schemes are studied, and it is shown that in some cases, the lower bound is tight in a scaling-law sense. By contrast, it is shown that the standard practice of separating source from channel coding may incur an exponential penalty in terms of communication resources, as a function of the number of sensors. Hence, such code designs effectively prevent scalability. 
Finally, it is outlined how the results extend to cases involving missing synchronization and channel fading.", "Sensing, processing and communication must be jointly optimized for efficient operation of resource-limited wireless sensor networks. We propose a novel source-channel matching approach for distributed field estimation that naturally integrates these basic operations and facilitates a unified analysis of the impact of key parameters (number of nodes, power, field complexity) on estimation accuracy. At the heart of our approach is a distributed source-channel communication architecture that matches the spatial scale of field coherence with the spatial scale of node synchronization for phase-coherent communication: the sensor field is uniformly partitioned into multiple cells and the nodes in each cell coherently communicate simple statistics of their measurements to the destination via a dedicated noisy multiple access channel (MAC). Essentially, the optimal field estimate in each cell is implicitly computed at the destination via the coherent spatial averaging inherent in the MAC, resulting in optimal power-distortion scaling with the number of nodes. In general, smoother fields demand lower per-node power but require node synchronization over larger scales for optimal estimation. In particular, optimal mean-square distortion scaling can be achieved with sub-linear power scaling. Our results also reveal a remarkable power-density tradeoff inherent in our approach: increasing the sensor density reduces the total power required to achieve a desired distortion. A direct consequence is that consistent field estimation is possible, in principle, even with vanishing total power in the limit of high sensor density.", "We consider a problem of broadcast communication in sensor networks, in which samples of a random field are collected at each node, and the goal is for all nodes to obtain an estimate of the entire field within a prescribed distortion value. The main idea we explore in this paper is that of jointly compressing the data generated by different nodes as this information travels over multiple hops, to eliminate correlations in the representation of the sampled field. Our main contributions are: (a) we obtain, using simple network flow concepts, conditions on the rate distortion function of the random field, so as to guarantee that any node can obtain the measurements collected at every other node in the network, quantized to within any prescribed distortion value; and (b) we construct a large class of physically-motivated stochastic models for sensor data, for which we are able to prove that the joint rate distortion function of all the data generated by the whole network grows slower than the bounds found in (a). A truly novel aspect of our work is the tight coupling between routing and source coding, explicitly formulated in a simple and analytically tractable model - to the best of our knowledge, this connection had not been studied before.", "We address the problem of deterministic oversampling of bandlimited sensor fields in a distributed communication-constrained processing environment, where it is desired for a central intelligent unit to reconstruct the sensor field to maximum pointwise accuracy.We show, using a dither-based sampling scheme, that is is possible to accomplish this using minimal inter-sensor communication with the aid of a multitude of low-precision sensors. 
Furthermore, we show the feasibility of having a flexible tradeoff between the average oversampling rate and the Analog to Digital (A D) quantization precision per sensor sample with respect to achieving exponential accuracy in the number of bits per Nyquist-period, thereby exposing a key underpinning \"conservation of bits\" principle. That is, we can distribute the bit budget per Nyquist-period along the amplitude-axis (precision of A D converter) and space (or time or space-time) using oversampling in an almost arbitrary discrete-valued manner, while retaining the same reconstruction error decay profile. Interestingly this oversampling is possible in a highly localized communication setting, with only nearest-neighbor communication, making it very attractive for dense sensor networks operating under stringent inter-node communication constraints. Finally we show how our scheme incorporates security as a by-product due to the presence of an underlying dither signal which can be used as a natural encryption device for security. The choice of the dither function enhances the security of the network.", "Sensor networks have emerged as a fundamentally new tool for monitoring spatial phenomena. This paper describes a theory and methodology for estimating inhomogeneous, two-dimensional fields using wireless sensor networks. Inhomogeneous fields are composed of two or more homogeneous (smoothly varying) regions separated by boundaries. The boundaries, which correspond to abrupt spatial changes in the field, are nonparametric one-dimensional curves. The sensors make noisy measurements of the field, and the goal is to obtain an accurate estimate of the field at some desired destination (typically remote from the sensor network). The presence of boundaries makes this problem especially challenging. There are two key questions: 1) Given n sensors, how accurately can the field be estimated? 2) How much energy will be consumed by the communications required to obtain an accurate estimate at the destination? Theoretical upper and lower bounds on the estimation error and energy consumption are given. A practical strategy for estimation and communication is presented. The strategy, based on a hierarchical data-handling and communication architecture, provides a near-optimal balance of accuracy and energy consumption.", "In this paper we investigate the capability of large-scale sensor networks to measure and transport a two-dimensional field. We consider a data-gathering wireless sensor network in which densely deployed sensors take periodic samples of the sensed field, and then scalar quantize, encode and transmit them to a single receiver central controller where snapshot images of the sensed field are reconstructed. The quality of the reconstructed field is limited by the ability of the encoder to compress the data to a rate less than the single-receiver transport capacity of the network. Subject to a constraint on the quality of the reconstructed field, we are interested in how fast data can be collected (or equivalently how closely in time these snapshots can be taken) due to the limitation just mentioned. As the sensor density increases to infinity, more sensors send data to the central controller. However, the data is more correlated, and the encoder can do more compression. The question is: Can the encoder compress sufficiently to meet the limit imposed by the transport capacity? Alternatively, how long does it take to transport one snapshot? 
We show that as the density increases to infinity, the total number of bits required to attain a given quality also increases to infinity under any compression scheme. At the same time, the single-receiver transport capacity of the network remains constant as the density increases. We therefore conclude that for the given scenario, even though the correlation between sensor data increases as the density increases, any data compression scheme is insufficient to transport the required amount of data for the given quality. Equivalently, the amount of time it takes to transport one snapshot goes to infinity." ] }
0901.1703
2951868322
This paper considers a multi-cell multiple antenna system with precoding used at the base stations for downlink transmission. For precoding, channel state information (CSI) is essential at the base stations. A popular technique for obtaining this CSI in time division duplex (TDD) systems is uplink training by utilizing the reciprocity of the wireless medium. This paper mathematically characterizes the impact that uplink training has on the performance of such multi-cell multiple antenna systems. When non-orthogonal training sequences are used for uplink training, the paper shows that the precoding matrix used by the base station in one cell becomes corrupted by the channel between that base station and the users in other cells in an undesirable manner. This paper analyzes this fundamental problem of pilot contamination in multi-cell systems. Furthermore, it develops a new multi-cell MMSE-based precoding method that mitigates this problem. In addition to being a linear precoding method, this precoding method has a simple closed-form expression that results from an intuitive optimization problem formulation. Numerical results show significant performance gains compared to certain popular single-cell precoding methods.
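A small numerical sketch of the pilot-contamination effect described in the abstract: when a user in a neighboring cell reuses the same pilot, the base station's least-squares channel estimate is the sum of the desired and interfering channels plus noise. Antenna count, SNR, and channel statistics are assumed values for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
M = 8              # base-station antennas (assumed value)
pilot_snr = 10.0   # uplink pilot SNR, linear scale (assumed value)

# Channels seen at the base station: its own user and a user in a
# neighboring cell that reuses the same (non-orthogonal) pilot sequence.
h_own = (rng.normal(size=M) + 1j * rng.normal(size=M)) / np.sqrt(2)
h_other = (rng.normal(size=M) + 1j * rng.normal(size=M)) / np.sqrt(2)

# With both users sending the same unit-energy pilot, the least-squares
# channel estimate is the sum of both channels plus receiver noise.
noise = (rng.normal(size=M) + 1j * rng.normal(size=M)) / np.sqrt(2 * pilot_snr)
h_hat = h_own + h_other + noise

def nmse(est, true):
    return float(np.linalg.norm(est - true) ** 2 / np.linalg.norm(true) ** 2)

print("estimation NMSE with contamination   :", round(nmse(h_hat, h_own), 3))
print("estimation NMSE without contamination:", round(nmse(h_own + noise, h_own), 3))
```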
Over the past decade, a variety of aspects of downlink and uplink transmission problems in a single-cell setting have been studied. In the information-theoretic literature, these problems are studied as the broadcast channel (BC) and the multiple access channel (MAC), respectively. For the Gaussian BC and the general MAC, the problems have been studied for both single- and multiple-antenna cases. The sum capacity of the multi-antenna Gaussian BC has been shown to be achieved by dirty paper coding (DPC) in @cite_8 @cite_14 @cite_6 @cite_24 . It was shown in @cite_1 that DPC characterizes the full capacity region of the multi-antenna Gaussian BC. These results assume perfect CSI at the base station and the users. In addition, the DPC technique is computationally challenging to implement in practice. There has been significant research focus on reducing the computational complexity at the base station and the users, and different low-complexity precoding schemes have been proposed. This body of work @cite_0 @cite_25 @cite_16 @cite_30 @cite_29 demonstrates that sum rates close to the sum capacity can be achieved with much lower computational complexity. However, these results also assume perfect CSI at the base station and the users.
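As an example of the low-complexity linear precoding schemes cited above, the sketch below compares plain channel inversion (zero forcing) with a regularized inverse on a randomly drawn multi-user MISO channel under perfect CSI. The dimensions, power, and regularization constant are illustrative assumptions, not parameters taken from the cited papers.

```python
import numpy as np

rng = np.random.default_rng(2)
M, K, P = 4, 4, 10.0   # transmit antennas, single-antenna users, total power

# Random flat-fading downlink channel; row k is user k's channel vector.
H = (rng.normal(size=(K, M)) + 1j * rng.normal(size=(K, M))) / np.sqrt(2)

def sum_rate(H, W, P, noise_var=1.0):
    """Sum rate (bits/channel use) of linear precoder W scaled to total power P."""
    W = W * np.sqrt(P) / np.linalg.norm(W, "fro")
    G = H @ W                                   # effective gains seen by the users
    sig = np.abs(np.diag(G)) ** 2
    interf = np.sum(np.abs(G) ** 2, axis=1) - sig
    return float(np.sum(np.log2(1 + sig / (noise_var + interf))))

zf = H.conj().T @ np.linalg.inv(H @ H.conj().T)                         # channel inversion
rzf = H.conj().T @ np.linalg.inv(H @ H.conj().T + (K / P) * np.eye(K))  # regularized inversion
print("ZF  sum rate:", round(sum_rate(H, zf, P), 2), "bits/channel use")
print("RZF sum rate:", round(sum_rate(H, rzf, P), 2), "bits/channel use")
```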
{ "cite_N": [ "@cite_30", "@cite_14", "@cite_8", "@cite_29", "@cite_1", "@cite_6", "@cite_24", "@cite_0", "@cite_16", "@cite_25" ], "mid": [ "2132881059", "2103749601", "2151795416", "2023877378", "2030546921", "2161410889", "", "", "2097578209", "1883796613" ], "abstract": [ "The sum rate capacity of the multi-antenna broadcast channel has recently been computed. However, the search for efficient practical schemes that achieve it is still ongoing. In this paper, we focus on schemes with linear preprocessing of the transmitted data. We propose two criteria for the preceding matrix design: one maximizing the sum rate and the other maximizing the minimum rate among all users. The latter problem is shown to be quasiconvex and is solved exactly via a bisection method. In addition to preceding, we employ a signal scaling scheme that minimizes the average bit-error-rate (BER). The signal scaling scheme is posed as a convex optimization problem, and thus can be solved exactly via efficient interior-point methods. In terms of the achievable sum rate, the proposed technique significantly outperforms traditional channel inversion methods, while having comparable (in fact, often superior) BER performance", "We characterize the sum capacity of the vector Gaussian broadcast channel by showing that the existing inner bound of Marton and the existing upper bound of Sato are tight for this channel. We exploit an intimate four-way connection between the vector broadcast channel, the corresponding point-to-point channel (where the receivers can cooperate), the multiple-access channel (MAC) (where the role of transmitters and receivers are reversed), and the corresponding point-to-point channel (where the transmitters can cooperate).", "A Gaussian broadcast channel (GBC) with r single-antenna receivers and t antennas at the transmitter is considered. Both transmitter and receivers have perfect knowledge of the channel. Despite its apparent simplicity, this model is, in general, a nondegraded broadcast channel (BC), for which the capacity region is not fully known. For the two-user case, we find a special case of Marton's (1979) region that achieves optimal sum-rate (throughput). In brief, the transmitter decomposes the channel into two interference channels, where interference is caused by the other user signal. Users are successively encoded, such that encoding of the second user is based on the noncausal knowledge of the interference caused by the first user. The crosstalk parameters are optimized such that the overall throughput is maximum and, surprisingly, this is shown to be optimal over all possible strategies (not only with respect to Marton's achievable region). For the case of r>2 users, we find a somewhat simpler choice of Marton's region based on ordering and successively encoding the users. For each user i in the given ordering, the interference caused by users j>i is eliminated by zero forcing at the transmitter, while interference caused by users j<i is taken into account by coding for noncausally known interference. Under certain mild conditions, this scheme is found to be throughput-wise asymptotically optimal for both high and low signal-to-noise ratio (SNR). We conclude by providing some numerical results for the ergodic throughput of the simplified zero-forcing scheme in independent Rayleigh fading.", "Block diagonalization (BD) is a precoding technique that eliminates interuser interference in downlink multiuser multiple-input multiple-output (MIMO) systems. 
With the assumptions that all users have the same number of receive antennas and utilize all receive antennas when scheduled for transmission, the number of simultaneously supportable users with BD is limited by the ratio of the number of base station transmit antennas to the number of user receive antennas. In a downlink MIMO system with a large number of users, the base station may select a subset of users to serve in order to maximize the total throughput. The brute-force search for the optimal user set, however, is computationally prohibitive. We propose two low-complexity suboptimal user selection algorithms for multiuser MIMO systems with BD. Both algorithms aim to select a subset of users such that the total throughput is nearly maximized. The first user selection algorithm greedily maximizes the total throughput, whereas the criterion of the second algorithm is based on the channel energy. We show that both algorithms have linear complexity in the total number of users and achieve around 95 of the total throughput of the complete search method in simulations", "The Gaussian multiple-input multiple-output (MIMO) broadcast channel (BC) is considered. The dirty-paper coding (DPC) rate region is shown to coincide with the capacity region. To that end, a new notion of an enhanced broadcast channel is introduced and is used jointly with the entropy power inequality, to show that a superposition of Gaussian codes is optimal for the degraded vector broadcast channel and that DPC is optimal for the nondegraded case. Furthermore, the capacity region is characterized under a wide range of input constraints, accounting, as special cases, for the total power and the per-antenna power constraints", "We consider a multiuser multiple-input multiple- output (MIMO) Gaussian broadcast channel (BC), where the transmitter and receivers have multiple antennas. Since the MIMO BC is in general a nondegraded BC, its capacity region remains an unsolved problem. We establish a duality between what is termed the \"dirty paper\" achievable region (the Caire-Shamai (see Proc. IEEE Int. Symp. Information Theory, Washington, DC, June 2001, p.322) achievable region) for the MIMO BC and the capacity region of the MIMO multiple-access channel (MAC), which is easy to compute. Using this duality, we greatly reduce the computational complexity required for obtaining the dirty paper achievable region for the MIMO BC. We also show that the dirty paper achievable region achieves the sum-rate capacity of the MIMO BC by establishing that the maximum sum rate of this region equals an upper bound on the sum rate of the MIMO BC.", "", "", "Recent theoretical results describing the sum-capacity when using multiple antennas to communicate with multiple users in a known rich scattering environment have not yet been followed with practical transmission schemes that achieve this capacity. We introduce a simple encoding algorithm that achieves near-capacity at sum-rates of tens of bits channel use. The algorithm is a variation on channel inversion that regularizes the inverse and uses a \"sphere encoder\" to perturb the data to reduce the energy of the transmitted signal. The paper is comprised of two parts. In this second part, we show that, after the regularization of the channel inverse introduced in the first part, a certain perturbation of the data using a \"sphere encoder\" can be chosen to further reduce the energy of the transmitted signal. 
The performance difference with and without this perturbation is shown to be dramatic. With the perturbation, we achieve excellent performance at all signal-to-noise ratios. The results of both uncoded and turbo-coded simulations are presented.", "In this paper we compare the following two methods of transmit preceding for the multiple antenna broadcast channel: vector perturbation applied to channel inversion (also termed zero forcing or ZF) precoding and scalar Tomlinson-Harashima (TH) precoding applied to sum-rate achieving transmit precoding. Our results indicate that vector perturbation applied to channel inversion preceding can significantly reduce power enhancement and yields the full diversity afforded by the channel to each user. Scalar TH-modulo reduction significantly reduces the power enhancement for precoding based on sum-rate criterion. The solution to vector perturbation applied to ZF precoding requires the solution to an integer optimization problem which is exponentially complex, or an approximation to the integer optimization problem which requires the Lenstra-Lenstra-Lovasz algorithm of polynomial complexity. Instead we propose a simpler solution (an approximation) to the vector perturbation problem based on the Rayleigh-Ritz theorem (R.A. Horn and C.R. Johnson, 1985). This approximate solution achieves the same diversity order as the optimal vector perturbation technique, but suffers a small coding loss. This solution is of polynomial complexity order. Further, a small increase in complexity with a \"sphere\"-based search around this solution yields significantly better performance. Since this vector perturbation is required to be done at the symbol rate, the lower complexity of the proposed algorithm is valuable in practice" ] }
0901.1782
1494256758
We investigate the problem of spreading information contents in a wireless ad hoc network with mechanisms embracing the peer-to-peer paradigm. In our vision, information dissemination should satisfy the following requirements: (i) it conforms to a predefined distribution and (ii) it is evenly and fairly carried by all nodes in their turn. In this paper, we observe the dissemination effects when the information moves across nodes according to two well-known mobility models, namely random walk and random direction. Our approach is fully distributed and comes at a very low cost in terms of protocol overhead; in addition, simulation results show that the proposed solution can achieve the aforementioned goals under different network scenarios, provided that a sufficient number of information replicas are injected into the network. This observation calls for a further step: in the realistic case where the user content demand varies over time, we need a content replication drop strategy to adapt the number of information replicas to the changes in the information query rate. We therefore devise a distributed, lightweight scheme that performs efficiently in a variety of scenarios.
Our study is related to the problem of optimal cache placement in wireless networks. Several works have addressed this issue by exploiting its similarity to the facility location and @math -median problems. Both problems are NP-hard, and a number of constant-factor approximation algorithms have been proposed for each of them @cite_5 @cite_12 @cite_13 ; these algorithms, however, are not amenable to an efficient distributed implementation.
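To make the connection to the @math -median formulation concrete, here is a simple greedy placement heuristic that adds cache locations one at a time so as to minimize the total access cost. It is not one of the constant-factor approximation algorithms cited above, and the node positions and budget are random values chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
nodes = rng.uniform(0.0, 1.0, size=(60, 2))   # node positions in a unit square
k = 4                                         # number of cache replicas to place

def total_access_cost(nodes, caches):
    """Sum over all nodes of the distance to the closest cache node."""
    d = np.linalg.norm(nodes[:, None, :] - nodes[caches][None, :, :], axis=2)
    return float(d.min(axis=1).sum())

# Greedy k-median: repeatedly add the cache location that reduces cost the most.
caches = []
for _ in range(k):
    best = min((c for c in range(len(nodes)) if c not in caches),
               key=lambda c: total_access_cost(nodes, caches + [c]))
    caches.append(best)

print("cache nodes:", caches)
print("total access cost:", round(total_access_cost(nodes, caches), 3))
```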
{ "cite_N": [ "@cite_5", "@cite_13", "@cite_12" ], "mid": [ "2139841919", "2098653858", "1989289072" ], "abstract": [ "We present approximation algorithms for the metric uncapacitated facility location problem and the metric k -median problem achieving guarantees of 3 and 6 respectively. The distinguishing feature of our algorithms is their low running time: O(m log m ) and O(m log m(L + log ( n ))) respectively, where n and m are the total number of vertices and edges in the underlying complete bipartite graph on cities and facilities. The main algorithmic ideas are a new extension of the primal-dual schema and the use of Lagrangian relaxation to derive approximation algorithms.", "In this paper, we address the problem of efficient cache placement in multi-hop wireless networks. We consider a network comprising a server with an interface to the wired network, and other nodes requiring access to the information stored at the server. In order to reduce access latency in such a communication environment, an effective strategy is caching the server information at some of the nodes distributed across the network. Caching, however, can imply a considerable overhead cost; for instance, disseminating information incurs additional energy as well as bandwidth burden. Since wireless systems are plagued by scarcity of available energy and bandwidth, we need to design caching strategies that optimally trade-off between overhead cost and access latency. We pose our problem as an integer linear program. We show that this problem is the same as a special case of the connected facility location problem, which is known to be NP-hard. We devise a polynomial time algorithm which provides a suboptimal solution. The proposed algorithm applies to any arbitrary network topology and can be implemented in a distributed and asynchronous manner. In the case of a tree topology, our algorithm gives the optimal solution. In the case of an arbitrary topology, it finds a feasible solution with an objective function value within a factor of 6 of the optimal value. This performance is very close to the best approximate solution known today, which is obtained in a centralized manner. We compare the performance of our algorithm against three candidate cache placement schemes, and show via extensive simulation that our algorithm consistently outperforms these alternative schemes.", "We study approximation algorithms for placing replicated data in arbitrary networks. Consider a network of nodes with individual storage capacities and a metric communication cost function, in which each node periodically issues a request for an object drawn from a collection of uniform-length objects. We consider the problem of placing copies of the objects among the nodes such that the average access cost is minimized. Our main result is a polynomial-time constant-factor approximation algorithm for this placement problem. Our algorithm is based on a careful rounding of a linear programming relaxation of the problem. We also show that the data placement problem is MAXSNP-hard. We extend our approximation result to a generalization of the data placement problem that models additional costs such as the cost of realizing the placement. We also show that when object lengths are non-uniform, a constant-factor approximation is achievable if the capacity at each node in the approximate solution is allowed to exceed that in the optimal solution by the length of the largest object." ] }
Distributed algorithms for allocation of information replicas are proposed, among others, in @cite_19 @cite_6 @cite_4 @cite_9 . These solutions typically involve significant communication overhead, especially when applied to mobile environments, and focus on minimizing the information access cost or the query delay. In our work, instead, we consider a cooperative environment and aim at a uniform distribution of the information copies, while evenly distributing the load among the nodes acting as providers.
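A minimal simulation of the kind of uniform, load-sharing dissemination we aim for: a few content replicas perform independent random walks on a grid topology, and we track how evenly the carrying load spreads over the nodes. Grid size, replica count, and walk length are arbitrary assumptions.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(4)
side, n_replicas, steps = 10, 8, 2000
n_nodes = side * side

def neighbors(v):
    """4-neighborhood on a side x side grid (nodes indexed row-major)."""
    r, c = divmod(v, side)
    out = []
    if r > 0: out.append(v - side)
    if r < side - 1: out.append(v + side)
    if c > 0: out.append(v - 1)
    if c < side - 1: out.append(v + 1)
    return out

replicas = list(rng.choice(n_nodes, size=n_replicas, replace=False))
carry_time = Counter()
for _ in range(steps):
    for i, v in enumerate(replicas):
        carry_time[v] += 1
        replicas[i] = rng.choice(neighbors(v))   # hand the copy to a random neighbor

times = np.array([carry_time[v] for v in range(n_nodes)])
print("mean / std of per-node carry time:", times.mean(), round(float(times.std()), 1))
```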
{ "cite_N": [ "@cite_19", "@cite_9", "@cite_4", "@cite_6" ], "mid": [ "2126105048", "2138368466", "1967217776", "2161450107" ], "abstract": [ "The advances in computer and wireless communication technologies have led to an increasing interest in ad hoc networks which are temporarily constructed by only mobile hosts. In ad hoc networks, since mobile hosts move freely, disconnections occur frequently, and this causes frequent network division. Consequently, data accessibility in ad hoc networks is lower than that in the conventional fixed networks. We propose three replica allocation methods to improve data accessibility by replicating data items on mobile hosts. In these three methods, we take into account the access frequency from mobile hosts to each data item and the status of the network connection. We also show the results of simulation experiments regarding the performance evaluation of our proposed methods.", "We present a family of epidemic algorithms for maintaining replicated database systems. The algorithms are based on the causal delivery of log records where each record corresponds to one transaction instead of one operation. The first algorithm in this family is a pessimistic protocol that ensures serializability and guarantees strict executions. Since we expect the epidemic algorithms to be used in environments with low probability of conflicts among transactions, we develop a variant of the pessimistic algorithm which is optimistic in that transactions commit as soon as they terminate locally and inconsistencies are detected asynchronously as the effects of committed transactions propagate through the system. The last member of the family of epidemic algorithms is pessimistic and uses voting with quorums to resolve conflicts and improve transaction response time. A simulation study evaluates the performance of the protocols.", "In mobile ad hoc networks, nodes move freely and link node failures are common. This leads to frequent network partitions, which may significantly degrade the performance of data access in ad hoc networks. When the network partition occurs, mobile nodes in one network are not able to access data hosted by nodes in other networks. In this paper, we deal with this problem by applying data replication techniques. Existing data replication solutions in both wired or wireless networks aim at either reducing the query delay or improving the data accessibility. As both metrics are important for mobile nodes, we propose schemes to balance the tradeoffs between data accessibility and query delay under different system settings and requirements. Simulation results show that the proposed schemes can achieve a balance between these two metrics and provide satisfying system performance.", "Data caching can significantly improve the efficiency of information access in a wireless ad hoc network by reducing the access latency and bandwidth usage. However, designing efficient distributed caching algorithms is nontrivial when network nodes have limited memory. In this article, we consider the cache placement problem of minimizing total data access cost in ad hoc networks with multiple data items and nodes with limited memory capacity. The above optimization problem is known to be NP-hard. Defining benefit as the reduction in total access cost, we present a polynomial-time centralized approximation algorithm that provably delivers a solution whose benefit is at least 1 4 (1 2 for uniform-size data items) of the optimal benefit. 
The approximation algorithm is amenable to localized distributed implementation, which is shown via simulations to perform close to the approximation algorithm. Our distributed algorithm naturally extends to networks with mobile nodes. We simulate our distributed algorithm using a network simulator (ns2) and demonstrate that it significantly outperforms another existing caching technique (by Yin and Cao [33]) in all important performance metrics. The performance differential is particularly large in more challenging scenarios such as higher access frequency and smaller memory." ] }
Relevant to our study is also the work in @cite_14 , which computes the (near) optimal number of replicas of video clips in wireless networks, based on the bandwidth required for clip display and their access statistics. However, the strategy proposed in @cite_14 requires a centralized implementation and applies only to string or grid topologies. In the context of sensor networks, the study in @cite_11 analytically derives the minimum number of sensors that ensures full coverage of an area of interest, under the assumption of a uniform sensor deployment.
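The square-root replication rule described in @cite_14 can be sketched in a few lines: the number of replicas of each clip is made proportional to the square root of the product of its display bandwidth and its access frequency, then rounded to fit a storage budget. The clip parameters below are hypothetical.

```python
import numpy as np

# Hypothetical clips: display bandwidth (Mbps) and access frequency (fraction).
bandwidth = np.array([2.0, 4.0, 8.0, 1.0])
frequency = np.array([0.50, 0.25, 0.15, 0.10])
total_replicas = 40                        # assumed storage budget across devices

weights = np.sqrt(bandwidth * frequency)   # square-root rule from @cite_14
replicas = np.maximum(1, np.round(total_replicas * weights / weights.sum()))
print("replicas per clip:", replicas.astype(int).tolist())
```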
{ "cite_N": [ "@cite_14", "@cite_11" ], "mid": [ "193898447", "2127497269" ], "abstract": [ "This study investigates replication of data in a novel streaming architecture consisting of ad-hoc networks of wireless devices. One application of these devices is home-to-home (H2O) entertainment systems where a device collaborates with others to provide each household with on-demand access to a large selection of audio and video clips. These devices are configured with a substantial amount of storage and may cache several clips for future use. A contribution of this study is a technique to compute the number of replicas for a clip based on the square-root of the product of bandwidth required to display clips and their frequency of access , i.e., where . We provide a proof to show this strategy is near optimal when the objective is to maximize the number of simultaneous displays in the system with string and grid (both symmetric and asymmetric) topologies. We say “near optimal” because values of less than 0.5 may be more optimum. In addition, we use analytical and simulation studies to demonstrate its superiority when compared with other alternatives. A second contribution is an analytical model to estimate the theoretical upper bound on the number of simultaneous displays supported by an arbitrary grid topology of H2O devices. This analytical model is useful during capacity planning because it estimates the capabilities of a H2O configuration by considering: the size of an underlying repository, the number of nodes in a H2O cloud, the representative grid topology for this cloud, and the expected available network bandwidth and storage capacity of each device. It shows that one may control the ratio of repository size to the storage capacity of participating nodes in order to enhance system performance. We validate this analytical model with a simulation study and quantify its tradeoffs.", "Sensor networks are often desired to last many times longer than the active lifetime of individual sensors. This is usually achieved by putting sensors to sleep for most of their lifetime. On the other hand, surveillance kind of applications require guaranteed k-coverage of the protected region at all times. As a result, determining the appropriate number of sensors to deploy that achieves both goals simultaneously becomes a challenging problem. In this paper, we consider three kinds of deployments for a sensor network on a unit square - a √n x √n grid, random uniform (for all n points), and Poisson (with density n). In all three deployments, each sensor is active with probability p, independently from the others. Then, we claim that the critical value of the function npπr2 log(np) is 1 for the event of k-coverage of every point. We also provide an upper bound on the window of this phase transition. Although the conditions for the three deployments are similar, we obtain sharper bounds for the random deployments than the grid deployment, which occurs due to the boundary condition. In this paper, we also provide corrections to previously published results for the grid deployment model. Finally, we use simulation to show the usefulness of our analysis in real deployment scenarios." ] }
0901.1782
1494256758
We investigate the problem of spreading information contents in a wireless ad hoc network with mechanisms embracing the peer-to-peer paradigm. In our vision, information dissemination should satisfy the following requirements: (i) it conforms to a predefined distribution and (ii) it is evenly and fairly carried by all nodes in their turn. In this paper, we observe the dissemination effects when the information moves across nodes according to two well-known mobility models, namely random walk and random direction. Our approach is fully distributed and comes at a very low cost in terms of protocol overhead; in addition, simulation results show that the proposed solution can achieve the aforementioned goals under different network scenarios, provided that a sufficient number of information replicas are injected into the network. This observation calls for a further step: in the realistic case where the user content demand varies over time, we need a content replication drop strategy to adapt the number of information replicas to the changes in the information query rate. We therefore devise a distributed, lightweight scheme that performs efficiently in a variety of scenarios.
Again in the context of sensor networks, approaches based on active queries following a trajectory through the network, or on agents propagating information about local events, have been proposed in @cite_15 and @cite_3 , respectively. Note that both of these works focus on how such messages are forwarded through the network, while our aim is to make the desired information available by letting it move through the nodes' caches.
{ "cite_N": [ "@cite_15", "@cite_3" ], "mid": [ "2047977378", "2136982639" ], "abstract": [ "While sensor networks are going to be deployed in diverse application specific contexts, one unifying view is to treat them essentially as distributed databases. The simplest mechanism to obtain information from this kind of a database is to flood queries for named data within the network and obtain the relevant responses from sources. However, if the queries are (a) complex, (b) one-shot, and (c) for replicated data, this simple approach can be highly inefficient. In the context of energy-starved sensor networks, alternative strategies need to be examined for such queries. We propose a novel and efficient mechanism for obtaining information in sensor networks which we refer to as ACtive QUery forwarding In sensoR nEtworks (ACQUIRE). The basic principle behind ACQUIRE is to consider the query as an active entity that is forwarded through the network (either randomly or in some directed manner) in search of the solution. ACQUIRE also incorporates a look-ahead parameter d in the following manner: intermediate nodes that handle the active query use information from all nodes within d hops in order to partially resolve the query. When the active query is fully resolved, a completed response is sent directly back to the querying node. We take a mathematical modelling approach in this paper to calculate the energy costs associated with ACQUIRE. The models permit us to characterize analytically the impact of critical parameters, and compare the performance of ACQUIRE with respect to other schemes such as flooding-based querying (FBQ) and expanding ring search (ERS), in terms of energy usage, response latency and storage requirements. We show that with optimal parameter settings, depending on the update frequency, ACQUIRE obtains order of magnitude reduction over FBQ and potentially over 60–75 reduction over ERS (in highly dynamic environments and high query rates) in consumed energy. We show that these energy savings are provided in trade for increased response latency. The mathematical analysis is validated through extensive simulations. � 2003 Elsevier B.V. All rights reserved.", "in micro-sensor and radio technology will enable small but smart sensors to be deployed for a wide range of environmental monitoring applications. In order to constrain communication overhead, dense sensor networks call for new and highly efficient methods for distributing queries to nodes that have observed interesting events in the network. A highly efficient data-centric routing mechanism will offer significant power cost reductions (17), and improve network longevity. Moreover, because of the large amount of system and data redundancy possible, data becomes disassociated from specific node and resides in regions of the network (10)(7)(8). This paper describes and evaluates through simulation a scheme we call Rumor Routing, which allows for queries to be delivered to events in the network. Rumor Routing is tunable, and allows for tradeoffs between setup overhead and delivery reliability. It's intended for contexts in which geographic routing criteria are not applicable because a coordinate system is not available or the phenomenon of interest is not geographically correlated." ] }
0901.1853
2951410353
In this work we consider the communication of information in the presence of a causal adversarial jammer. In the setting under study, a sender wishes to communicate a message to a receiver by transmitting a codeword x=(x_1,...,x_n) bit-by-bit over a communication channel. The adversarial jammer can view the transmitted bits x_i one at a time, and can change up to a p-fraction of them. However, the decisions of the jammer must be made in an online or causal manner. Namely, for each bit x_i the jammer's decision on whether to corrupt it or not (and on how to change it) must depend only on x_j for j <= i. This is in contrast to the "classical" adversarial jammer which may base its decisions on its complete knowledge of x. We present a non-trivial upper bound on the amount of information that can be communicated. We show that the achievable rate can be asymptotically no greater than min 1-H(p),(1-4p)^+ . Here H(.) is the binary entropy function, and (1-4p)^+ equals 1-4p for p < 0.25, and 0 otherwise.
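As a quick numerical illustration of the upper bound stated in the abstract above, the snippet below evaluates min{1 - H(p), (1 - 4p)^+}, where H(.) is the binary entropy function. It merely tabulates the stated expression for a few corruption fractions p and is not part of the paper's argument.

```python
import math

def binary_entropy(p):
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def causal_adversary_upper_bound(p):
    """Evaluate min{1 - H(p), (1 - 4p)^+} for a corruption fraction p."""
    return min(1.0 - binary_entropy(p), max(1.0 - 4.0 * p, 0.0))

for p in (0.05, 0.1, 0.2, 0.25):
    print(p, round(causal_adversary_upper_bound(p), 4))
```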
To the best of our knowledge, communication in the presence of a causal adversary has not been explicitly addressed in the literature (other than our prior work on causal adversaries over large- @math channels). Nevertheless, we note that the model of causal channels, being a natural one, has been ``on the table'' for several decades, and the analysis of the online causal channel model appears as an open question in the book of Csiszár and Körner @cite_3 (in the section addressing Arbitrarily Varying Channels @cite_11 ). Several variants of causal adversaries have been addressed in the past, for instance in @cite_11 @cite_10 @cite_16 @cite_4 @cite_6 ; however, the models considered therein differ significantly from ours.
{ "cite_N": [ "@cite_4", "@cite_3", "@cite_6", "@cite_16", "@cite_10", "@cite_11" ], "mid": [ "144396020", "1549664537", "2144630153", "2159535726", "2075714087", "1989102762" ], "abstract": [ "Robust and adaptive communication under uncertain interference by Anand Dilip Sarwate Doctor of Philosophy in Engineering—Electrical Engineering and Computer Sciences and the Designated Emphasis in Communication, Computation, and Statistics University of California, Berkeley Professor Michael Gastpar, Chair In the future, wireless communication systems will play an increasingly integral role in society. Cutting-edge application areas such as cognitive radio, ad-hoc networks, and sensor networks are changing the way we think about wireless services. The demand for ubiquitous communication and computing requires flexible communication protocols that can operate in a range of conditions. This thesis adopts and extends a mathematical model for these communication systems that accounts for uncertainty and time variation in link qualities. The arbitrarily varying channel (AVC) is an information theoretic channel model that has a time varying state with no statistical description. We assume the state is chosen by an adversarial jammer, reflecting the demand that our constructions work for all state sequences. In this thesis we show how resources such as secret keys, feedback, and side-information can help communication under this kind of uncertainty. In order to put our results in context we provide a detailed taxonomy of the known results on AVCs in a unified setting. We then prove new results on list decoding", "Csiszr and Krner's book is widely regarded as a classic in the field of information theory, providing deep insights and expert treatment of the key theoretical issues. It includes in-depth coverage of the mathematics of reliable information transmission, both in two-terminal and multi-terminal network scenarios. Updated and considerably expanded, this new edition presents unique discussions of information theoretic secrecy and of zero-error information theory, including the deep connections of the latter with extremal combinatorics. The presentations of all core subjects are self contained, even the advanced topics, which helps readers to understand the important connections between seemingly different problems. Finally, 320 end-of-chapter problems, together with helpful solving hints, allow readers to develop a full command of the mathematical techniques. It is an ideal resource for graduate students and researchers in electrical and electronic engineering, computer science and applied mathematics.", "In a recent paper, , presented a distributed polynomial-time rate-optimal network-coding scheme that works in the presence of Byzantine faults.We revisit their adversarial models and augment them with three, arguably realistic, models. In each of the models, we present a distributed scheme that demonstrates the usefulness of the model. In particular, all of the schemes obtain optimal rate C-z, where C is the network capacity and z is a bound on the number of links controlled by the adversary.", "In this paper, we review how Shannon's classical notion of capacity is not enough to characterize a noisy communication channel if the channel is intended to be used as part of a feedback loop to stabilize an unstable scalar linear system. 
While classical capacity is not enough, another sense of capacity (parametrized by reliability) called \"anytime capacity\" is necessary for the stabilization of an unstable process. The required rate is given by the log of the unstable system gain and the required reliability comes from the sense of stability desired. A consequence of this necessity result is a sequential generalization of the Schalkwijk-Kailath scheme for communication over the additive white Gaussian noise (AWGN) channel with feedback. In cases of sufficiently rich information patterns between the encoder and decoder, adequate anytime capacity is also shown to be sufficient for there to exist a stabilizing controller. These sufficiency results are then generalized to cases with noisy observations, delayed control actions, and without any explicit feedback between the observer and the controller. Both necessary and sufficient conditions are extended to continuous time systems as well. We close with comments discussing a hierarchy of difficulty for communication problems and how these results establish where stabilization problems sit in that hierarchy", "We design codes to transmit information over a network, some subset of which is controlled by a malicious adversary. The computationally unbounded, hidden adversary knows the message to be transmitted, and can observe and change information over the part of the network being controlled. The network nodes do not share resources such as shared randomness or a private key. We first consider a unicast problem in a network with |epsiv parallel, unit-capacity, directed edges. The rate-region has two parts. If the adversary controls a fraction p < 0.5 of the |epsiv edges, the maximal throughput equals (1 - p) |epsiv|. We describe low-complexity codes that achieve this rate-region. We then extend these results to investigate more general multicast problems in directed, acyclic networks", "A hang glider which relies upon ground effect forces in order to dynamically support a person suspended therefrom." ] }
0901.1853
2951410353
In this work we consider the communication of information in the presence of a causal adversarial jammer. In the setting under study, a sender wishes to communicate a message to a receiver by transmitting a codeword x=(x_1,...,x_n) bit-by-bit over a communication channel. The adversarial jammer can view the transmitted bits x_i one at a time, and can change up to a p-fraction of them. However, the decisions of the jammer must be made in an online or causal manner. Namely, for each bit x_i the jammer's decision on whether to corrupt it or not (and on how to change it) must depend only on x_j for j <= i. This is in contrast to the "classical" adversarial jammer which may base its decisions on its complete knowledge of x. We present a non-trivial upper bound on the amount of information that can be communicated. We show that the achievable rate can be asymptotically no greater than min 1-H(p),(1-4p)^+ . Here H(.) is the binary entropy function, and (1-4p)^+ equals 1-4p for p < 0.25, and 0 otherwise.
We note that under a very weak notion of capacity, in which one only requires the success probability to be bounded away from zero (instead of approaching @math ), the capacity of the omniscient channel, and thus of the binary causal-adversary channel, approaches @math . This follows from the fact that for @math sufficiently large and @math there exist @math codes that are @math list decodable with @math @cite_9 . Communicating using an @math list decodable code allows Bob to decode a list of @math messages that includes the message transmitted by Alice. By choosing a message uniformly at random from his list, Bob decodes correctly with probability at least @math .
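The decoding step above reduces to an elementary observation: if Bob's list of candidates is guaranteed to contain Alice's message and he guesses uniformly from it, he succeeds with probability exactly one over the list size, which is bounded away from zero for a constant list size. The toy simulation below (the list size and trial count are arbitrary choices) simply confirms this.

```python
import random

def list_decode_success_rate(list_size=8, trials=100_000):
    """Empirical success probability of guessing uniformly from a
    fixed-size list that always contains the transmitted message."""
    successes = 0
    for _ in range(trials):
        transmitted = random.randrange(list_size)  # index of the true message
        guess = random.randrange(list_size)        # Bob's uniform guess
        successes += (guess == transmitted)
    return successes / trials

print(list_decode_success_rate())  # close to 1/8 = 0.125
```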
{ "cite_N": [ "@cite_9" ], "mid": [ "2162725776" ], "abstract": [ "In the list-of-L decoding of a block code the receiver of a noisy sequence lists L possible transmitted messages, and is in error only if the correct message is not on the list. Consideration is given to (n,e,L) codes, which correct all sets of e or fewer errors in a block of n bits under list-of-L decoding. New geometric relations between the number of errors corrected under list-of-1 decoding and the (larger) number corrected under list-of-L decoding of the same code lead to new lower bounds on the maximum rate of (n,e,L) codes. They show that a jammer who can change a fixed fraction p >" ] }
0901.2730
2951327449
In this paper, we present a novel and general framework called Maximum Entropy Discrimination Markov Networks (MaxEnDNet), which integrates the max-margin structured learning and Bayesian-style estimation and combines and extends their merits. Major innovations of this model include: 1) It generalizes the extant Markov network prediction rule based on a point estimator of weights to a Bayesian-style estimator that integrates over a learned distribution of the weights. 2) It extends the conventional max-entropy discrimination learning of classification rule to a new structural max-entropy discrimination paradigm of learning the distribution of Markov networks. 3) It subsumes the well-known and powerful Maximum Margin Markov network (M @math N) as a special case, and leads to a model similar to an @math -regularized M @math N that is simultaneously primal and dual sparse, or other types of Markov network by plugging in different prior distributions of the weights. 4) It offers a simple inference algorithm that combines existing variational inference and convex-optimization based M @math N solvers as subroutines. 5) It offers a PAC-Bayesian style generalization bound. This work represents the first successful attempt to combine Bayesian-style learning (based on generative models) with structured maximum margin learning (based on a discriminative model), and outperforms a wide array of competing methods for structured input output learning on both synthetic and real data sets.
Although the parameter distribution @math in Theorem has a form similar to that of Bayesian Conditional Random Fields (BCRFs), MaxEnDNet is fundamentally different from BCRFs, as we have stated. @cite_2 present an interesting confidence-weighted linear classification method, which automatically estimates the mean and variance of the model parameters in online learning. That procedure is similar to, but distinct from, our variational Bayesian method for the Laplace MaxEnDNet.
{ "cite_N": [ "@cite_2" ], "mid": [ "1648445109" ], "abstract": [ "This paper introduces a general Bayesian framework for obtaining sparse solutions to regression and classification tasks utilising models linear in the parameters. Although this framework is fully general, we illustrate our approach with a particular specialisation that we denote the 'relevance vector machine' (RVM), a model of identical functional form to the popular and state-of-the-art 'support vector machine' (SVM). We demonstrate that by exploiting a probabilistic Bayesian learning framework, we can derive accurate prediction models which typically utilise dramatically fewer basis functions than a comparable SVM while offering a number of additional advantages. These include the benefits of probabilistic predictions, automatic estimation of 'nuisance' parameters, and the facility to utilise arbitrary basis functions (e.g. non-'Mercer' kernels). We detail the Bayesian framework and associated learning algorithm for the RVM, and give some illustrative examples of its application along with some comparative benchmarks. We offer some explanation for the exceptional degree of sparsity obtained, and discuss and demonstrate some of the advantageous features, and potential extensions, of Bayesian relevance learning." ] }
0901.0339
1946152203
We address the problem of semantic querying of relational databases (RDB) modulo knowledge bases using very expressive knowledge representation formalisms, such as full first-order logic or its various fragments. We propose to use a first-order logic (FOL) reasoner for computing schematic answers to deductive queries, with the subsequent instantiation of these schematic answers using a conventional relational DBMS. In this research note, we outline the main idea of this technique -- using abstractions of databases and constrained clauses for deriving schematic answers. The proposed method can be directly used with regular RDB, including legacy databases. Moreover, we propose it as a potential basis for an efficient Web-scale semantic search technology.
In general, semantic access to relational databases is not a new concept. Some of the work on this topic is limited to semantic access to, or semantic interpretation of, relational data in terms of Description Logic-based ontologies or RDF (see, e.g., @cite_9 @cite_27 @cite_28 ) or of non-logical semantic schemas (see @cite_11 ). There is also a large number of projects and publications on the use of RDB for storing and querying large RDF and OWL datasets: see, e.g., @cite_19 @cite_0 @cite_29 @cite_18 @cite_12 , to mention just a few. The format of this research note does not allow us to give a comprehensive overview of such work, so we concentrate on research that tries to go beyond the expressivity of DL and, at the same time, is applicable to legacy relational databases.
{ "cite_N": [ "@cite_18", "@cite_28", "@cite_9", "@cite_29", "@cite_0", "@cite_19", "@cite_27", "@cite_12", "@cite_11" ], "mid": [ "", "2166306255", "122713538", "", "25697800", "1603311785", "2162015762", "94495030", "" ], "abstract": [ "", "Relational databases are widely used today as a mechanism for providing access to structured data. They, however, are not suitable for typical information finding tasks of end users. There is often a semantic gap between the queries users want to express and the queries that can be answered by the database. In this paper, we propose a system that bridges this semantic gap using domain knowledge contained in ontologies. Our system extends relational databases with the ability to answer semantic queries that are represented in SPARQL, an emerging Semantic Web query language. Users express their queries in SPARQL, based on a semantic model of the data, and they get back semantically relevant results. We define different categories of results that are semantically relevant to the users' query and show how our system retrieves these results. We evaluate the performance of our system on sample relational databases, using a combination of standard and custom ontologies.", "The goal of data integration is to provide a uniform access to a set of heterogeneous data sources, freeing the user from the knowledge about where the data are, how they are stored, and how they can be accessed. The problem of designing effective data integration solutions has been addressed by several research and development projects in the last years. One of the outcomes of this research work is a clear conceptual architecture for data integration1. According to this architecture [9], the main components of a data integration system are the global schema, the sources, and the mapping. Thus, a data integration system is seen as a triple 〈G,S,M〉, where:", "", "We propose a new Description Logic, called DL-Lite, specifically tailored to capture basic ontology languages, while keeping low complexity of reasoning. Reasoning here means not only computing subsumption between concepts, and checking satisfiability of the whole knowledge base, but also answering complex queries (in particular, conjunctive queries) over the set of instances maintained in secondary storage. We show that in DL-Lite the usual DL reasoning tasks are polynomial in the size of the TBox, and query answering is polynomial in the size of the ABox (i.e., in data complexity). To the best of our knowledge, this is the first result of polynomial data complexity for query answering over DL knowledge bases. A notable feature of our logic is to allow for a separation between TBox and ABox reasoning during query evaluation: the part of the process requiring TBox reasoning is independent of the ABox, and the part of the process requiring access to the ABox can be carried out by an SQL engine, thus taking advantage of the query optimization strategies provided by current DBMSs.", "Abstract : We present DLDB, a knowledge base system that extends a relational database management system with additional capabilities for DAML+OIL inference. We discuss a number of database schemas that can be used to store RDF data and discuss the tradeoffs of each. Then we describe how we extend our design to support DAML+OIL entailments. The most significant aspect of our approach is the use of a description logic reasoner to precompute the subsumption hierarchy. 
We describe a lightweight implementation that makes use of a common RDBMS (MS Access) and the FaCT description logic reasoner. Surprisingly, this simple approach provides good results for extensional queries over a large set of DAML+OIL data that commits to a representative ontology of moderate complexity. As such, we expect such systems to be adequate for personal or small-business usage.", "Ontologies are a crucial tool for formally specifying the vocabulary and relationship of concepts used on the Semantic Web. In order to share information, agents that use different vocabularies must be able to translate data from one ontological framework to another. Ontology translation is required when translating datasets, generating ontology extensions, and querying through different ontologies. OntoMerge, an online system for ontology merging and automated reasoning, can implement ontology translation with inputs and outputs in OWL or other web languages. Ontology translation can be thought of in terms of formal inference in a merged ontology. The merge of two related ontologies is obtained by taking the union of the concepts and the axioms defining them, and then adding bridging axioms that relate their concepts. The resulting merged ontology then serves as an inferential medium within which translation can occur. Our internal representation, Web-PDDL, is a strong typed first-order logic language for web application. Using a uniform notation for all problems allows us to factor out syntactic and semantic translation problems, and focus on the latter. Syntactic translation is done by an automatic translator between Web-PDDL and OWL or other web languages. Semantic translation is implemented using an inference engine (OntoEngine) which processes assertions and queries in Web-PDDL syntax, running in either a data-driven (forward chaining) or demand-driven (backward chaining) way.", "Recently, several approaches have been proposed on combining description logic (DL) reasoning with database techniques. In this paper we report on the LAS (Large Abox Store) system extending the DL reasoner Racer with a database used to store and query Tbox and Abox information. LAS stores for given knowledge bases their taxonomy and their complete Abox in its database. The Aboxes may contain role assertions. LAS can answer Tbox and Abox queries by combining SQL queries with DL reasoning. The architecture of LAS is based on merging techniques for so-called individual pseudo models.", "" ] }
0901.0339
1946152203
We address the problem of semantic querying of relational databases (RDB) modulo knowledge bases using very expressive knowledge representation formalisms, such as full first-order logic or its various fragments. We propose to use a first-order logic (FOL) reasoner for computing schematic answers to deductive queries, with the subsequent instantiation of these schematic answers using a conventional relational DBMS. In this research note, we outline the main idea of this technique -- using abstractions of databases and constrained clauses for deriving schematic answers. The proposed method can be directly used with regular RDB, including legacy databases. Moreover, we propose it as a potential basis for an efficient Web-scale semantic search technology.
The work presented here was originally inspired by the XSTONE project @cite_2 . In XSTONE, a resolution-based theorem prover (a reimplementation of Gandalf that is, in particular, optimised for taxonomic reasoning) is integrated with an RDBMS by loading rows from a database as ground facts into the reasoner and using them to answer queries with resolution. The system is highly scalable in terms of expressiveness: it accepts full FOL with some useful extensions, and also has parsers for RDF, RDFS and OWL. We believe that our approach has better data scalability and can cope with very large databases that are beyond the reach of XSTONE, mostly because our approach obtains answers in bulk, and also because of the way we use a highly optimised RDBMS.
{ "cite_N": [ "@cite_2" ], "mid": [ "2096144498" ], "abstract": [ "Summary: We describe multiple methods for accessing and querying the complex and integrated cellular data in the BioCyc family of databases: access through multiple file formats, access through Application Program Interfaces (APIs) for LISP, Perl and Java, and SQL access through the BioWarehouse relational database. Availability: The Pathway Tools software and 20 BioCyc DBs in Tiers 1 and 2 are freely available to academic users; fees apply to some types of commercial use. For download instructions see http: BioCyc.org download.shtml Supplementary information: For more details on programmatic access to BioCyc DBs, see http: bioinformatics.ai.sri.com ptools ptools-resources.html Contact: [email protected]" ] }
0901.0339
1946152203
We address the problem of semantic querying of relational databases (RDB) modulo knowledge bases using very expressive knowledge representation formalisms, such as full first-order logic or its various fragments. We propose to use a first-order logic (FOL) reasoner for computing schematic answers to deductive queries, with the subsequent instantiation of these schematic answers using a conventional relational DBMS. In this research note, we outline the main idea of this technique -- using abstractions of databases and constrained clauses for deriving schematic answers. The proposed method can be directly used with regular RDB, including legacy databases. Moreover, we propose it as a potential basis for an efficient Web-scale semantic search technology.
On the more theoretical side, it is necessary to mention two other connections. The idea of using constraints to represent schematic answers is borrowed from Constraint Logic Programming @cite_20 and Constrained Resolution @cite_17 . Also, the general idea of using reasoning to preprocess expressive queries into a database-related formalism was borrowed from @cite_10 , where a resolution- and paramodulation-based calculus is used to translate expressive DL ontologies into Disjunctive Datalog. This work also shares a starting point with ours -- the observation that reasoning methods that treat individuals (data values) separately cannot scale up sufficiently.
{ "cite_N": [ "@cite_10", "@cite_20", "@cite_17" ], "mid": [ "", "2063727779", "1598593580" ], "abstract": [ "", "Abstract Constraint Logic Programming (CLP) is a merger of two declarative paradigms: constraint solving and logic programming. Although a relatively new field, CLP has progressed in several quite different directions. In particular, the early fundamental concepts have been adapted to better serve in different areas of applications. In this survey of CLP, a primary goal is to give a systematic description of the major trends in terms of common fundamental concepts. The three main parts cover the theory, implementation issues, and programming for applications.", "Recently, extensions of constrained logic programming and constrained resolution for theorem proving have been introduced, that consider constraints, which are interpreted under an open world assumption. We discuss relationships between applications of these approaches for query answering in knowledge base systems on the one hand and abduction-based hypothetical reasoning on the other hand. We show both that constrained resolution can be used as an operationalization of (some limited form of) abduction and that abduction is the logical status of an answer generation process through constrained resolution, ie., it is an abductive but not a deductive form of reasoning." ] }
0812.4983
1485891830
To establish secure (point-to-point and or broadcast) communication channels among the nodes of a wireless sensor network is a fundamental task. To this end, a plethora of (socalled) key pre-distribution schemes have been proposed in the past. All these schemes, however, rely on shared secret(s), which are assumed to be somehow pre-loaded onto the sensor nodes. In this paper, we propose a novel method for secure initialization of sensor nodes based on a visual out-of-band channel. Using the proposed method, the administrator of a sensor network can distribute keys onto the sensor nodes, necessary to bootstrap key pre-distribution. Our secure initialization method requires only a little extra cost, is efficient and scalable with respect to the number of sensor nodes. Moreover, based on a usability study that we conducted, the method turns out to be quite user-friendly and easy to use by naive human users.
The problem of secure sensor node initialization has been considered only recently. Prior to the MiB method of @cite_15 (which we reviewed in the previous section), the following schemes were proposed. The ``Shake-them-up'' scheme @cite_5 suggests a simple manual technique for pairing two sensor nodes that involves shaking and twirling them in very close proximity to each other in order to prevent eavesdropping. While being shaken, the two sensor nodes exchange packets and agree on a key one bit at a time, relying on the adversary's inability to determine the sending node. However, it turns out that the sender can be identified using radio fingerprinting @cite_14 , so the security of this scheme is uncertain.
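A minimal sketch of the bit-agreement step described above, as presented in the abstract of @cite_5 : Alice encodes each secret bit in the claimed source field of an otherwise empty packet, and Bob, knowing which packets he himself did not send, recovers the bit. The packet representation below is an assumption for illustration, and the sketch deliberately omits the radio-level indistinguishability (enforced by shaking the nodes) on which the scheme's security actually rests.

```python
import random

def shake_them_up_bit(sender_bit):
    """One round as described in the abstract of the scheme: Alice encodes
    her secret bit in the *claimed* source field of an empty packet."""
    claimed_source = "Alice" if sender_bit == 1 else "Bob"
    return {"source": claimed_source, "payload": b""}

def bob_decode(packet):
    # Bob knows he did not transmit this packet, so the bit is 1 exactly
    # when the claimed source is "Alice".
    return 1 if packet["source"] == "Alice" else 0

key_bits = [random.randrange(2) for _ in range(16)]
received = [bob_decode(shake_them_up_bit(b)) for b in key_bits]
assert received == key_bits  # Bob recovers Alice's key bits
```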
{ "cite_N": [ "@cite_5", "@cite_15", "@cite_14" ], "mid": [ "2013405390", "", "1985671681" ], "abstract": [ "This paper presents a new pairing protocol that allows twoCPU-constrained wireless devices Alice and Bob to establish ashared secret at a very low cost. To our knowledge, this is thefirst software pairing scheme that does not rely on expensivepublic-key cryptography, out-of-band channels (such as a keyboardor a display) or specific hardware, making it inexpensive andsuitable for CPU-constrained devices such as sensors. In the described protocol, Alice can send the secret bit 1 toBob by broadcasting an (empty) packet with the source field set toAlice. Similarly, Alice can send the secret bit 0 to Bob bybroadcasting an (empty) packet with the source field set to Bob.Only Bob can identify the real source of the packet (since it didnot send it, the source is Alice), and can recover the secret bit(1 if the source is set to Alice or 0 otherwise). An eavesdroppercannot retrieve the secret bit since it cannot figure out whetherthe packet was actually sent by Alice or Bob. By randomlygenerating n such packets Alice and Bob can agree on ann-bit secret key. Our scheme requires that the devices being paired, Alice andBob, are shaken during the key exchange protocol. This is toguarantee that an eavesdropper cannot identify the packets sent byAlice from those sent by Bob using data from the RSSI (ReceivedSignal Strength Indicator) registers available in commercialwireless cards. The proposed protocol works with off-the-shelf802.11 wireless cards and is secure against eavesdropping attacksthat use power analysis. It requires, however, some firmwarechanges to protect against attacks that attempt to identify thesource of packets from their transmission frequency.", "", "We demonstrate the feasibility of finger-printing the radio of wireless sensor nodes (Chipcon 1000 radio, 433MHz). We show that, with this type of devices, a receiver can create device radio finger-prints and subsequently identify origins of messages exchanged between the devices, even if message contents and device identifiers are hidden. We further analyze the implications of device fingerprinting on the security of sensor networking protocols, specifically, we propose two new mechanisms for the detection of wormholes in sensor networks." ] }
0812.4983
1485891830
To establish secure (point-to-point and or broadcast) communication channels among the nodes of a wireless sensor network is a fundamental task. To this end, a plethora of (socalled) key pre-distribution schemes have been proposed in the past. All these schemes, however, rely on shared secret(s), which are assumed to be somehow pre-loaded onto the sensor nodes. In this paper, we propose a novel method for secure initialization of sensor nodes based on a visual out-of-band channel. Using the proposed method, the administrator of a sensor network can distribute keys onto the sensor nodes, necessary to bootstrap key pre-distribution. Our secure initialization method requires only a little extra cost, is efficient and scalable with respect to the number of sensor nodes. Moreover, based on a usability study that we conducted, the method turns out to be quite user-friendly and easy to use by naive human users.
The initialization method that we propose in this paper is similar to device pairing schemes that use an OOB channel. Thus, we also review the most relevant device pairing methods and discuss whether they can be extended to the application of sensor node initialization. In their seminal work, Stajano and Anderson @cite_24 proposed to establish a shared secret between two devices using a link created through physical contact (such as an electric cable). As pointed out previously, this approach requires interfaces not available on most sensor motes. Moreover, the approach would not scale.
{ "cite_N": [ "@cite_24" ], "mid": [ "2099042427" ], "abstract": [ "In the near future, many personal electronic devices will be able to communicate with each other over a short range wireless channel. We investigate the principal security issues for such an environment. Our discussion is based on the concrete example of a thermometer that makes its readings available to other nodes over the air. Some lessons learned from this example appear to be quite general to ad-hoc networks, and rather different from what we have come to expect in more conventional systems: denial of service, the goals of authentication, and the problems of naming all need re-examination. We present the resurrecting duckling security policy model, which describes secure transient association of a device with multiple serialised owners." ] }
0812.4983
1485891830
To establish secure (point-to-point and or broadcast) communication channels among the nodes of a wireless sensor network is a fundamental task. To this end, a plethora of (socalled) key pre-distribution schemes have been proposed in the past. All these schemes, however, rely on shared secret(s), which are assumed to be somehow pre-loaded onto the sensor nodes. In this paper, we propose a novel method for secure initialization of sensor nodes based on a visual out-of-band channel. Using the proposed method, the administrator of a sensor network can distribute keys onto the sensor nodes, necessary to bootstrap key pre-distribution. Our secure initialization method requires only a little extra cost, is efficient and scalable with respect to the number of sensor nodes. Moreover, based on a usability study that we conducted, the method turns out to be quite user-friendly and easy to use by naive human users.
Balfanz et al. @cite_23 extended the above approach through the use of infrared as an OOB channel -- the devices exchange their public keys over the wireless channel and then exchange (at least @math -bit long) hashes of their respective public keys over infrared. Most sensor motes do not possess infrared transmitters. Also, infrared is not easily perceptible by humans. Based on the protocol of @cite_23 , the ``Seeing-is-Believing'' (SiB) scheme was proposed in @cite_20 . SiB involves establishing two unidirectional visual OOB channels -- one device encodes the data into a two-dimensional barcode and the other device reads it using a photo camera. To apply SiB to sensor node initialization, one would need to affix a static barcode (during the manufacturing phase) on each sensor node, which could then be captured by a camera on the sink node. However, this would only provide unidirectional authentication, since the sensor nodes cannot each afford a camera. Note that it would also not be possible to manually input the hash of the sink's public key on each sensor node, since most sensor nodes do not possess keypads, and even if they did, this would not scale.
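The verification step common to the infrared scheme of @cite_23 and to SiB can be sketched as follows: the authenticated OOB channel carries a short hash of the sender's public key, and the receiver accepts the key obtained over the insecure wireless link only if its locally computed hash matches. The hash function, the truncation length, and the function names below are illustrative assumptions rather than the exact encodings used in those schemes.

```python
import hashlib

def oob_commitment(public_key: bytes, bits: int = 80) -> str:
    """Short hash of a public key, as would be carried by the OOB channel
    (barcode or infrared); the length is an assumed example value."""
    digest = hashlib.sha256(public_key).hexdigest()
    return digest[: bits // 4]          # each hex character carries 4 bits

def verify_received_key(received_key: bytes, oob_value: str, bits: int = 80) -> bool:
    # Accept the key received over the insecure wireless link only if it
    # hashes to the value obtained over the authenticated OOB channel.
    return oob_commitment(received_key, bits) == oob_value

pk = b"-----DEMO PUBLIC KEY-----"
assert verify_received_key(pk, oob_commitment(pk))
assert not verify_received_key(b"attacker key", oob_commitment(pk))
```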
{ "cite_N": [ "@cite_20", "@cite_23" ], "mid": [ "2116897550", "1542316315" ], "abstract": [ "Current mechanisms for authenticating communication between devices that share no prior context are inconvenient for ordinary users, without the assistance of a trusted authority. We present and analyze seeing-is-believing, a system that utilizes 2D barcodes and camera-telephones to implement a visual channel for authentication and demonstrative identification of devices. We apply this visual channel to several problems in computer security, including authenticated key exchange between devices that share no prior context, establishment of a trusted path for configuration of a TCG-compliant computing platform, and secure device configuration in the context of a smart home.", "In this paper we address the problem of secure communication and authentication in ad-hoc wireless networks. This is a difficult problem, as it involves bootstrapping trust between strangers. We present a user-friendly solution, which provides secure authentication using almost any established public-key-based key exchange protocol, as well as inexpensive hash-based alternatives. In our approach, devices exchange a limited amount of public information over a privileged side channel, which will then allow them to complete an authenticated key exchange protocol over the wireless link. Our solution does not require a public key infrastructure, is secure against passive attacks on the privileged side channel and all attacks on the wireless link, and directly captures users’ intuitions that they want to talk to a particular previously unknown device in their physical proximity. We have implemented our system in Java for a variety of different devices, communication media, and key" ] }
0812.4983
1485891830
To establish secure (point-to-point and or broadcast) communication channels among the nodes of a wireless sensor network is a fundamental task. To this end, a plethora of (socalled) key pre-distribution schemes have been proposed in the past. All these schemes, however, rely on shared secret(s), which are assumed to be somehow pre-loaded onto the sensor nodes. In this paper, we propose a novel method for secure initialization of sensor nodes based on a visual out-of-band channel. Using the proposed method, the administrator of a sensor network can distribute keys onto the sensor nodes, necessary to bootstrap key pre-distribution. Our secure initialization method requires only a little extra cost, is efficient and scalable with respect to the number of sensor nodes. Moreover, based on a usability study that we conducted, the method turns out to be quite user-friendly and easy to use by naive human users.
@cite_16 proposed a new scheme based on a visual OOB channel. The scheme uses one of the protocols based on Short Authenticated Strings (SAS) @cite_6 , @cite_4 , and is aimed at pairing two devices (such as a cell phone and an access point), only one of which has a relevant receiver (such as a camera). The protocol is depicted in Figure and, as we will see in the next section, it is the protocol that we utilize in our proposal. In this paper, we extend the above scheme to a ``many-to-one'' setting applicable to key distribution in sensor networks. Basically, the novel OOB channel that we build consists of multiple devices blinking their SAS data simultaneously, which is captured using a camera connected to the sink.
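Under simplifying assumptions, the many-to-one check described above might look like the sketch below: each node blinks a short SAS string, the camera at the sink extracts these strings from the captured frames (that extraction is abstracted away here), and the sink accepts a node only if the blinked SAS matches the SAS it derives from that node's protocol transcript. The SAS derivation shown, a truncated hash of the transcript, is a generic stand-in and not the specific SAS protocol of @cite_6 or @cite_4 .

```python
import hashlib

def derive_sas(transcript: bytes, sas_bits: int = 15) -> str:
    """Generic stand-in for a SAS: a short string derived from the
    protocol transcript (assumed here to be a truncated hash)."""
    digest = int.from_bytes(hashlib.sha256(transcript).digest(), "big")
    return format(digest % (1 << sas_bits), f"0{sas_bits}b")

def sink_verifies(nodes):
    """nodes: list of (node_id, transcript_seen_by_sink, blinked_sas)."""
    return {node_id: derive_sas(transcript) == blinked_sas
            for node_id, transcript, blinked_sas in nodes}

t1, t2 = b"transcript-node-1", b"transcript-node-2"
captured = [("n1", t1, derive_sas(t1)),            # honest node
            ("n2", t2, derive_sas(b"tampered"))]   # mismatching transcript
print(sink_verifies(captured))                     # {'n1': True, 'n2': False}
```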
{ "cite_N": [ "@cite_16", "@cite_4", "@cite_6" ], "mid": [ "2001619264", "", "1686853624" ], "abstract": [ "Recently several researchers and practitioners have begun to address the problem of how to set up secure communication between two devices without the assistance of a trusted third party. , (2005) proposed that one device displays the hash of its public key in the form of a barcode, and the other device reads it using a camera. Mutual authentication requires switching the roles of the devices and repeating the above process in the reverse direction. In this paper, we show how strong mutual authentication can be achieved even with a unidirectional visual channel, without having to switch device roles. By adopting recently proposed improved pairing protocols, we propose how visual channel authentication can be used even on devices that have very limited displaying capabilities.", "", "Key agreement protocols are frequently based on the Diffie-Hellman protocol but require authenticating the protocol messages in two ways. This can be made by a cross-authentication protocol. Such protocols, based on the assumption that a channel which can authenticate short strings is available (SAS-based), have been proposed by Vaudenay. In this paper, we survey existing protocols and we propose a new one. Our proposed protocol requires three moves and a single SAS to be authenticated in two ways. It is provably secure in the random oracle model. We can further achieve security with a generic construction (e.g. in the standard model) at the price of an extra move. We discuss applications such as secure peer-to-peer VoIP." ] }
0812.5064
2106011017
This paper introduces a model based upon games on an evolving network, and develops three clustering algorithms according to it. In the clustering algorithms, data points for clustering are regarded as players who can make decisions in games. On the network describing relationships among data points, an edge-removing-and-rewiring (ERR) function is employed to explore in a neighborhood of a data point, which removes edges connecting to neighbors with small payoffs, and creates new edges to neighbors with larger payoffs. As such, the connections among data points vary over time. During the evolution of network, some strategies are spread in the network. As a consequence, clusters are formed automatically, in which data points with the same evolutionarily stable strategy are collected as a cluster, so the number of evolutionarily stable strategies indicates the number of clusters. Moreover, the experimental results have demonstrated that data points in datasets are clustered reasonably and efficiently, and the comparison with other algorithms also provides an indication of the effectiveness of the proposed algorithms.
Evolutionary game theory, which combines traditional game theory with the idea of evolution, is based on the assumption of bounded rationality. In contrast, in classical game theory players are assumed to be perfectly rational or hyper-rational, and to always choose optimal strategies in complex environments. Finite information and cognitive limitations, however, often make fully rational decisions inaccessible. Moreover, perfect rationality may cause the so-called backward induction paradox @cite_12 in finitely repeated games. On the other hand, as a relaxation of the perfect rationality of classical game theory, bounded rationality means that people in games need only be partially rational @cite_17 , which explains why in many cases people respond or play instinctively according to heuristic rules and social norms rather than adopting the strategies indicated by rational game theory @cite_7 . Thus, various dynamic rules can be defined to characterize the boundedly rational behavior of players in evolutionary game theory.
{ "cite_N": [ "@cite_7", "@cite_12", "@cite_17" ], "mid": [ "1992195122", "2145922343", "1688243677" ], "abstract": [ "Abstract Game theory is one of the key paradigms behind many scientific disciplines from biology to behavioral sciences to economics. In its evolutionary form and especially when the interacting agents are linked in a specific social network the underlying solution concepts and methods are very similar to those applied in non-equilibrium statistical physics. This review gives a tutorial-type overview of the field for physicists. The first four sections introduce the necessary background in classical and evolutionary game theory from the basic definitions to the most important results. The fifth section surveys the topological complications implied by non-mean-field-type social network structures in general. The next three sections discuss in detail the dynamic behavior of three prominent classes of models: the Prisoner's Dilemma, the Rock–Scissors–Paper game, and Competing Associations. The major theme of the review is in what sense and how the graph structure of interactions can modify and enrich the picture of long term behavioral patterns emerging in evolutionary games.", "Les AA. montrent que le paradoxe de l'induction retrograde (backward induction) est resoluble. Une solution repose sur le fait que les joueurs (ou agents) rationnels ne sont pas necessairement en position d'utiliser l'argument d'induction retrograde", "Part I Journey to a 21st birthday: the boy in Wisconsin forests and fields education in Chicago encounter with a scientific revolution - political science at Chicago. Part II The scientist as a young man: a taste of research - the City Managers' Association managing research - Berkeley teaching at Illinois Tech a matter of loyalty building a business school - the Graduate School of Industrial Administration research and science politics mazes without minotaurs roots of artificial intelligence climbing the mountain - artificial intelligence achieved. Part III View from the mountain: exploring the plain personal threads in the warp creating a university environment for cognitive science and A.I. on being argumentative the student troubles the scientist as politician foreign adventures. Part IV Research after 60: from Nobel to now the amateur diplomat in China and the Soviet Union guides for choice. Afterword: the scientist as problem solver." ] }
0812.5064
2106011017
This paper introduces a model based upon games on an evolving network, and develops three clustering algorithms according to it. In the clustering algorithms, data points for clustering are regarded as players who can make decisions in games. On the network describing relationships among data points, an edge-removing-and-rewiring (ERR) function is employed to explore in a neighborhood of a data point, which removes edges connecting to neighbors with small payoffs, and creates new edges to neighbors with larger payoffs. As such, the connections among data points vary over time. During the evolution of network, some strategies are spread in the network. As a consequence, clusters are formed automatically, in which data points with the same evolutionarily stable strategy are collected as a cluster, so the number of evolutionarily stable strategies indicates the number of clusters. Moreover, the experimental results have demonstrated that data points in datasets are clustered reasonably and efficiently, and the comparison with other algorithms also provides an indication of the effectiveness of the proposed algorithms.
Evolutionary stability is a central concept in evolutionary game theory. In biological settings, evolutionary stability provides a robust criterion for strategies against natural selection. Furthermore, it also means that any small group of individuals that tries some alternative strategy obtains lower payoffs than those who stick to the original strategy @cite_21 . Suppose that individuals in an infinite and homogeneous population, who play symmetric games with equal probability, are randomly matched and all employ the same strategy @math . However, if a small group of mutants with population share @math playing some other strategy appears in the population, they will receive lower payoffs. Therefore, the strategy @math is said to be evolutionarily stable if and only if, for any mutant strategy @math , the inequality @math holds, where the function @math denotes the payoff for playing strategy @math against strategy @math @cite_11 .
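For a symmetric two-strategy game this condition is easy to check numerically. Writing E(x, y) for the payoff of playing x against y, a strategy s is evolutionarily stable if, for every mutant t and all sufficiently small invasion shares eps, E(s, (1-eps)s + eps t) > E(t, (1-eps)s + eps t). The sketch below tests this inequality for the pure strategies of a 2x2 payoff matrix at one small, fixed eps; the example matrices and the choice of eps are illustrative assumptions.

```python
EPS = 1e-3  # assumed "small" invasion share

def payoff(A, i, q):
    """Expected payoff of pure strategy i against a population mixture q."""
    return sum(A[i][j] * q[j] for j in range(len(q)))

def is_ess(A, s, eps=EPS):
    """Check the ESS inequality for pure strategy s in a symmetric game
    with payoff matrix A, against every other pure mutant strategy."""
    n = len(A)
    for t in range(n):
        if t == s:
            continue
        mix = [0.0] * n
        mix[s] += 1.0 - eps
        mix[t] += eps
        if payoff(A, s, mix) <= payoff(A, t, mix):
            return False
    return True

# Hawk-Dove example (V=2, C=4): neither pure strategy is an ESS,
# whereas in the Prisoner's Dilemma defection (index 1) is.
hawk_dove = [[-1, 2], [0, 1]]
pd = [[3, 0], [5, 1]]
print(is_ess(hawk_dove, 0), is_ess(hawk_dove, 1))  # False False
print(is_ess(pd, 1))                               # True
```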
{ "cite_N": [ "@cite_21", "@cite_11" ], "mid": [ "1945527616", "1968606957" ], "abstract": [ "This text introduces current evolutionary game theory--where ideas from evolutionary biology and rationalistic economics meet--emphasizing the links between static and dynamic approaches and noncooperative game theory. The author provides an overview of the developments that have taken place in this branch of game theory, discusses the mathematical tools needed to understand the area, describes both the motivation and intuition for the concepts involved, and explains why and how the theory is relevant to economics.", "A group of individuals resolve their disputes by a knockout tournament. In each round of the tournament, the remaining contestants form pairs which compete, the winners progressing to the next round and the losers being eliminated. The payoff received depends upon how far the player has progressed and a cost is incurred only when it is defeated. We only consider strategies in which individuals are constrained to adopt a fixed play throughout the successive rounds. The case where individuals can vary their choice of behaviour from round to round will be treated elsewhere. The complexity of the system is investigated and illustrated both by special cases and numerical examples." ] }
0812.5064
2106011017
This paper introduces a model based upon games on an evolving network, and develops three clustering algorithms according to it. In the clustering algorithms, data points for clustering are regarded as players who can make decisions in games. On the network describing relationships among data points, an edge-removing-and-rewiring (ERR) function is employed to explore in a neighborhood of a data point, which removes edges connecting to neighbors with small payoffs, and creates new edges to neighbors with larger payoffs. As such, the connections among data points vary over time. During the evolution of network, some strategies are spread in the network. As a consequence, clusters are formed automatically, in which data points with the same evolutionarily stable strategy are collected as a cluster, so the number of evolutionarily stable strategies indicates the number of clusters. Moreover, the experimental results have demonstrated that data points in datasets are clustered reasonably and efficiently, and the comparison with other algorithms also provides an indication of the effectiveness of the proposed algorithms.
In addition, the cooperation mechanism and the spatio-temporal dynamics related to it have long been investigated within the framework of evolutionary game theory, based on the prisoner's dilemma (PD) game or the snowdrift game, which model interactions between a pair of players. Early on, the iterated PD game was widely studied, in which each player interacted with all other players. Through such round-robin interactions, the strategies in the population evolved according to their payoffs. As a result, the strategy of unconditional defection was always evolutionarily stable @cite_14 , while pure cooperators could not survive. Nevertheless, the Tit-for-Tat strategy, which promotes cooperation based on reciprocity, is evolutionarily stable as well @cite_22 .
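The round-robin setting described above can be reproduced in a few lines. The sketch below plays an iterated Prisoner's Dilemma between every pair of players in a small population of Tit-for-Tat players and unconditional defectors, using the standard payoff values T=5, R=3, P=1, S=0 (assumed here for illustration) and accumulating total scores; with several Tit-for-Tat players present, their mutual cooperation gives them higher totals than the defectors.

```python
# Payoffs to the row player: keys are (my_move, opponent_move) with 'C'/'D'.
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def tit_for_tat(opponent_history):
    return 'C' if not opponent_history else opponent_history[-1]

def all_defect(opponent_history):
    return 'D'

def iterated_pd(strat_a, strat_b, rounds=200):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a, move_b = strat_a(hist_b), strat_b(hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

players = [('TFT-%d' % i, tit_for_tat) for i in range(3)] + \
          [('ALLD-%d' % i, all_defect) for i in range(3)]
totals = {name: 0 for name, _ in players}
for i in range(len(players)):
    for j in range(i + 1, len(players)):
        (na, sa), (nb, sb) = players[i], players[j]
        pa, pb = iterated_pd(sa, sb)
        totals[na] += pa
        totals[nb] += pb
print(totals)  # TFT players accumulate higher totals via mutual cooperation
```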
{ "cite_N": [ "@cite_14", "@cite_22" ], "mid": [ "2085728653", "2062663664" ], "abstract": [ "Every form of behavior is shaped by trial and error. Such stepwise adaptation can occur through individual learning or through natural selection, the basis of evolution. Since the work of Maynard Smith and others, it has been realized how game theory can model this process. Evolutionary game theory replaces the static solutions of classical game theory by a dynamical approach centered not on the concept of rational players but on the population dynamics of behavioral programs. In this book the authors investigate the nonlinear dynamics of the self-regulation of social and economic behavior, and of the closely related interactions among species in ecological communities. Replicator equations describe how successful strategies spread and thereby create new conditions that can alter the basis of their success, i.e., to enable us to understand the strategic and genetic foundations of the endless chronicle of invasions and extinctions that punctuate evolution. In short, evolutionary game theory describes when to escalate a conflict, how to elicit cooperation, why to expect a balance of the sexes, and how to understand natural selection in mathematical terms. Comprehensive treatment of ecological and game theoretic dynamics Invasion dynamics and permanence as key concepts Explanation in terms of games of things like competition between species", "Cooperation in organisms, whether bacteria or primates, has been a difficulty for evolutionary theory since Darwin. On the assumption that interactions between pairs of individuals occur on a probabilistic basis, a model is developed based on the concept of an evolutionarily stable strategy in the context of the Prisoner's Dilemma game. Deductions from the model, and the results of a computer tournament show how cooperation based on reciprocity can get started in an asocial world, can thrive while interacting with a wide range of other strategies, and can resist invasion once fully established. Potential applications include specific aspects of territoriality, mating, and disease." ] }
0812.3120
2061159444
Imperfect channel state information degrades the performance of multiple-input multiple-output (MIMO) communications; its effects on single-user (SU) and multiuser (MU) MIMO transmissions are quite different. In particular, MU-MIMO suffers from residual interuser interference due to imperfect channel state information while SU-MIMO only suffers from a power loss. This paper compares the throughput loss of both SU and MU-MIMO in the broadcast channel due to delay and channel quantization. Accurate closed-form approximations are derived for achievable rates for both SU and MU-MIMO. It is shown that SU-MIMO is relatively robust to delayed and quantized channel information, while MU-MIMO with zero-forcing precoding loses its spatial multiplexing gain with a fixed delay or fixed codebook size. Based on derived achievable rates, a mode switching algorithm is proposed, which switches between SU and MU-MIMO modes to improve the spectral efficiency based on average signal-to-noise ratio (SNR), normalized Doppler frequency, and the channel quantization codebook size. The operating regions for SU and MU modes with different delays and codebook sizes are determined, and they can be used to select the preferred mode. It is shown that the MU mode is active only when the normalized Doppler frequency is very small, and the codebook size is large.
For the MIMO downlink, CSIT is required to separate the spatial channels for different users. To obtain the full spatial multiplexing gain for the MU-MIMO system employing zero-forcing (ZF) or block-diagonalization (BD) precoding, it was shown in @cite_38 @cite_14 that the quantization codebook size for limited feedback needs to increase linearly with SNR (in dB) and the number of transmit antennas. Zero-forcing dirty-paper coding and channel inversion systems with limited feedback were investigated in @cite_9 , where a sum rate ceiling due to a fixed codebook size was derived for both schemes. In @cite_11 , it was shown that to exploit multiuser diversity for ZF, both channel direction and information about signal-to-interference-plus-noise ratio (SINR) must be fed back. More recently, a comprehensive study of the MIMO downlink with ZF precoding was done in @cite_6 , which considered downlink training and explicit channel feedback and concluded that significant downlink throughput is achievable with efficient CSI feedback. For a compound MIMO broadcast channel, the information theoretic analysis in @cite_35 showed that scaling the CSIT quality such that the CSIT error is dominated by the inverse of the SNR is both necessary and sufficient to achieve the full spatial multiplexing gain.
{ "cite_N": [ "@cite_38", "@cite_35", "@cite_14", "@cite_9", "@cite_6", "@cite_11" ], "mid": [ "2074120933", "2541048740", "2164290759", "2106872716", "347411582", "2124858630" ], "abstract": [ "Multiple transmit antennas in a downlink channel can provide tremendous capacity (i.e., multiplexing) gains, even when receivers have only single antennas. However, receiver and transmitter channel state information is generally required. In this correspondence, a system where each receiver has perfect channel knowledge, but the transmitter only receives quantized information regarding the channel instantiation is analyzed. The well-known zero-forcing transmission technique is considered, and simple expressions for the throughput degradation due to finite-rate feedback are derived. A key finding is that the feedback rate per mobile must be increased linearly with the signal-to-noise ratio (SNR) (in decibels) in order to achieve the full multiplexing gain. This is in sharp contrast to point-to-point multiple-input multiple-output (MIMO) systems, in which it is not necessary to increase the feedback rate as a function of the SNR", "A multiple antenna broadcast channel (multiple transmit antennas, one antenna at each receiver) with imperfect channel state information available to the transmitter is considered. If perfect channel state information is available to the transmitter, then a multiplexing gain equal to the minimum of the number of transmit antennas and the number of receivers is achievable. On the other hand, if each receiver has identical fading statistics and the transmitter has no channel information, the maximum achievable multiplexing gain is only one. The focus of this paper is on determination of necessary and sufficient conditions on the rate at which CSIT quality must improve with SNR in order for full multiplexing gain to be achievable. The main result of the paper shows that scaling CSIT quality such that the CSIT error is dominated by the inverse of the SNR is both necessary and sufficient to achieve the full multiplexing gain as well as a bounded rate offset (i.e., the sum rate has no negative sub-logarithmic terms) in the compound channel setting.", "Block diagonalization is a linear preceding technique for the multiple antenna broadcast (downlink) channel that involves transmission of multiple data streams to each receiver such that no multi-user interference is experienced at any of the receivers. This low-complexity scheme operates only a few dB away from capacity but requires very accurate channel knowledge at the transmitter. We consider a limited feedback system where each receiver knows its channel perfectly, but the transmitter is only provided with a finite number of channel feedback bits from each receiver. Using a random quantization argument, we quantify the throughput loss due to imperfect channel knowledge as a function of the feedback level. The quality of channel knowledge must improve proportional to the SNR in order to prevent interference-limitations, and we show that scaling the number of feedback bits linearly with the system SNR is sufficient to maintain a bounded rate loss. 
Finally, we compare our quantization strategy to an analog feedback scheme and show the superiority of quantized feedback.", "In this paper, we consider two different models of partial channel state information at the base station transmitter (CSIT) for multiple antenna broadcast channels: 1) the shape feedback model where the normalized channel vector of each user is available at the base station and 2) the limited feedback model where each user quantizes its channel vector according to a rotated codebook that is optimal in the sense of mean squared error and feeds back the codeword index. This paper is focused on characterizing the sum rate performance of both zero-forcing dirty paper coding (ZFDPC) systems and channel inversion (CI) systems under the given two partial CSIT models. Intuitively speaking, a system with shape feedback loses the sum rate gain of adaptive power allocation. However, shape feedback still provides enough channel knowledge for ZFDPC and CI to approach their own optimal throughput in the high signal-to-noise ratio (SNR) regime. As for limited feedback, we derive sum rate bounds for both signaling schemes and link their throughput performance to some basic properties of the quantization codebook. Interestingly, we find that limited feedback employing a fixed codebook leads to a sum rate ceiling for both schemes for asymptotically high SNR.", "We consider a MIMO fading broadcast channel and compute achievable ergodic rates when channel state information is acquired at the receivers via downlink training and explicit channel feedback is performed to provide transmitter channel state information (CSIT). Both “analog” and quantized (digital) channel feedback are analyzed, and digital feedback is shown to be potentially superior when the feedback channel uses per channel coefficient is larger than 1. Also, we show that by proper design of the digital feedback link, errors in the feedback have a relatively minor effect even if simple uncoded modulation is used on the feedback channel. We extend our analysis to the case of fading MIMO Multiaccess Channel (MIMO-MAC) in the feedback link, as well as to the case of a time-varying channel and feedback delay. We show that by exploiting the MIMO-MAC nature of the uplink channel, a fully scalable system with both downlink multiplexing gain and feedback redundancy proportional to the number of base station antennas can be achieved. Furthermore, the feedback strategy is optimized by a non-trivial combination of time-division and space-division multiple-access. For the case of delayed feedback, we show that in the realistic case where the fading process has (normalized) maximum Doppler frequency shift 0 F < 1=2, a fraction 1 2F of the optimal multiplexing gain is achievable. The general conclusion of this work is that very significant downlink throughput is achievable with simple and efficient channel state feedback, provided that the feedback link is properly designed.", "We analyze the sum-rate performance of a multi- antenna downlink system carrying more users than transmit antennas, with partial channel knowledge at the transmitter due to finite rate feedback. In order to exploit multiuser diversity, we show that the transmitter must have, in addition to directional information, information regarding the quality of each channel. Such information should reflect both the channel magnitude and the quantization error. 
Expressions for the SINR distribution and the sum-rate are derived, and tradeoffs between the number of feedback bits, the number of users, and the SNR are observed. In particular, for a target performance, having more users reduces feedback load." ] }
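To make the feedback-scaling discussion above concrete, the following back-of-the-envelope sketch uses the commonly cited random-vector-quantization rate-loss bound, delta_R <= log2(1 + SNR * 2^(-B/(M-1))); treating this expression as exact, as well as the target loss of 1 bps/Hz and M = 4 antennas, are assumptions of this sketch rather than statements quoted from @cite_38.

```python
# Back-of-the-envelope sketch of why the feedback load must grow with SNR for
# zero-forcing with quantized CSIT. The rate-loss bound used here,
#   delta_R <= log2(1 + snr * 2**(-B/(M-1))),
# is the commonly cited random-vector-quantization bound (an assumption in this
# sketch; see @cite_38 for the precise statement and conditions).
import math

def rate_loss_bound(snr_db, bits, m_tx):
    """Upper bound (bps/Hz) on the per-user throughput loss of ZF with B feedback bits."""
    snr = 10 ** (snr_db / 10.0)
    return math.log2(1.0 + snr * 2 ** (-bits / (m_tx - 1)))

def bits_for_target_loss(snr_db, m_tx, max_loss=1.0):
    """Smallest integer B keeping the bound below max_loss bps/Hz."""
    bits = 0
    while rate_loss_bound(snr_db, bits, m_tx) > max_loss:
        bits += 1
    return bits

if __name__ == '__main__':
    M = 4
    for snr_db in (0, 10, 20, 30):
        b = bits_for_target_loss(snr_db, M)
        print(f'SNR {snr_db:2d} dB: need about B = {b:2d} bits '
              f'(bound then {rate_loss_bound(snr_db, b, M):.2f} bps/Hz)')
    # The required B grows roughly linearly with SNR in dB, with slope ~(M-1)/3,
    # matching the scaling discussed above.
```

Running it shows the required codebook size growing linearly with SNR in dB, which is exactly the behavior that motivates the fixed-codebook analysis in this record.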
0812.3120
2061159444
Imperfect channel state information degrades the performance of multiple-input multiple-output (MIMO) communications; its effects on single-user (SU) and multiuser (MU) MIMO transmissions are quite different. In particular, MU-MIMO suffers from residual interuser interference due to imperfect channel state information while SU-MIMO only suffers from a power loss. This paper compares the throughput loss of both SU and MU-MIMO in the broadcast channel due to delay and channel quantization. Accurate closed-form approximations are derived for achievable rates for both SU and MU-MIMO. It is shown that SU-MIMO is relatively robust to delayed and quantized channel information, while MU-MIMO with zero-forcing precoding loses its spatial multiplexing gain with a fixed delay or fixed codebook size. Based on derived achievable rates, a mode switching algorithm is proposed, which switches between SU and MU-MIMO modes to improve the spectral efficiency based on average signal-to-noise ratio (SNR), normalized Doppler frequency, and the channel quantization codebook size. The operating regions for SU and MU modes with different delays and codebook sizes are determined, and they can be used to select the preferred mode. It is shown that the MU mode is active only when the normalized Doppler frequency is very small, and the codebook size is large.
Although previous studies show that the spatial multiplexing gain of MU-MIMO can be achieved with limited feedback, this requires the codebook size to increase with SNR and the number of transmit antennas. Even if such a requirement is satisfied, there is an inevitable rate loss due to quantization error, on top of other CSIT imperfections such as estimation error and delay. In addition, most prior work focused on the achievable spatial multiplexing gain, mainly based on analyses of the rate loss due to imperfect CSIT, which usually yield loose bounds @cite_38 @cite_14 @cite_35 . Such analysis cannot accurately characterize the throughput loss, and no comparison with SU-MIMO has been made. In this paper, we derive accurate approximations for the achievable throughput of both SU and MU-MIMO systems with fixed channel information accuracy, i.e., with a fixed delay and a fixed quantization codebook size. We are interested in the following question: given imperfect CSIT of fixed accuracy, which of SU-MIMO and MU-MIMO provides the higher achievable throughput? Based on this, we can select the one with the higher throughput as the transmission technique.
{ "cite_N": [ "@cite_38", "@cite_14", "@cite_35" ], "mid": [ "2074120933", "2164290759", "2541048740" ], "abstract": [ "Multiple transmit antennas in a downlink channel can provide tremendous capacity (i.e., multiplexing) gains, even when receivers have only single antennas. However, receiver and transmitter channel state information is generally required. In this correspondence, a system where each receiver has perfect channel knowledge, but the transmitter only receives quantized information regarding the channel instantiation is analyzed. The well-known zero-forcing transmission technique is considered, and simple expressions for the throughput degradation due to finite-rate feedback are derived. A key finding is that the feedback rate per mobile must be increased linearly with the signal-to-noise ratio (SNR) (in decibels) in order to achieve the full multiplexing gain. This is in sharp contrast to point-to-point multiple-input multiple-output (MIMO) systems, in which it is not necessary to increase the feedback rate as a function of the SNR", "Block diagonalization is a linear preceding technique for the multiple antenna broadcast (downlink) channel that involves transmission of multiple data streams to each receiver such that no multi-user interference is experienced at any of the receivers. This low-complexity scheme operates only a few dB away from capacity but requires very accurate channel knowledge at the transmitter. We consider a limited feedback system where each receiver knows its channel perfectly, but the transmitter is only provided with a finite number of channel feedback bits from each receiver. Using a random quantization argument, we quantify the throughput loss due to imperfect channel knowledge as a function of the feedback level. The quality of channel knowledge must improve proportional to the SNR in order to prevent interference-limitations, and we show that scaling the number of feedback bits linearly with the system SNR is sufficient to maintain a bounded rate loss. Finally, we compare our quantization strategy to an analog feedback scheme and show the superiority of quantized feedback.", "A multiple antenna broadcast channel (multiple transmit antennas, one antenna at each receiver) with imperfect channel state information available to the transmitter is considered. If perfect channel state information is available to the transmitter, then a multiplexing gain equal to the minimum of the number of transmit antennas and the number of receivers is achievable. On the other hand, if each receiver has identical fading statistics and the transmitter has no channel information, the maximum achievable multiplexing gain is only one. The focus of this paper is on determination of necessary and sufficient conditions on the rate at which CSIT quality must improve with SNR in order for full multiplexing gain to be achievable. The main result of the paper shows that scaling CSIT quality such that the CSIT error is dominated by the inverse of the SNR is both necessary and sufficient to achieve the full multiplexing gain as well as a bounded rate offset (i.e., the sum rate has no negative sub-logarithmic terms) in the compound channel setting." ] }
0812.3478
2952425088
The need for domain ontologies in mission critical applications such as risk management and hazard identification is becoming more and more pressing. Most research on ontology learning conducted in the academia remains unrealistic for real-world applications. One of the main problems is the dependence on non-incremental, rare knowledge and textual resources, and manually-crafted patterns and rules. This paper reports work in progress aiming to address such undesirable dependencies during ontology construction. Initial experiments using a working prototype of the system revealed promising potentials in automatically constructing high-quality domain ontologies using real-world texts.
Besides manual efforts, several ontology construction systems aimed at generating domain ontologies have also been developed in recent years. For example, @cite_18 employs standard natural language processing (NLP) tools and corpus analysis to extract and recognise domain terms. @cite_25 and @cite_8 are utilised to extract semantic relations between the terms. Similarly, the system @cite_36 makes use of non-incremental resources such as , and manually-crafted lexico-syntactic patterns to construct ontologies. To identify more complex relations, it employs association rule learning. More recent work from @cite_9 extracts terms and semantic relations through dependency structure analysis. The terms are mapped onto to obtain bags of senses. These senses are then clustered using cosine similarity. Semantic relations that consist of similar terms can be generalised using association rule mining algorithms to deduce statistically significant patterns. @cite_6 conducted a study on clustering and the associated tasks of feature extraction and selection, and similarity measurement for constructing ontologies. Contexts, appearing as sentences in which the terms occur, are used as features in their study. @cite_29 utilise dependency structure analysis to extract terms and relationships with the help of a controlled vocabulary called the and domain knowledge in the form of the .
{ "cite_N": [ "@cite_18", "@cite_8", "@cite_36", "@cite_9", "@cite_29", "@cite_6", "@cite_25" ], "mid": [ "", "2102381086", "1547207403", "2112671715", "2807866466", "", "128995279" ], "abstract": [ "", "Standard alphabetical procedures for organizing lexical information put together words that are spelled alike and scatter words with similar or related meanings haphazardly through the list. Unfortunately, there is no obvious alternative, no other simple way for lexicographers to keep track of what has been done or for readers to find the word they are looking for. But a frequent objection to this solution is that finding things on an alphabetical list can be tedious and time-consuming. Many people who would like to refer to a dictionary decide not to bother with it because finding the information would interrupt their work and break their train of thought.", "A reactive gaseous mixture which reacts in a localized heating zone to form a glass deposit on the inner wall of a tube is made to flow along the tube, and in the heating zone it is channelled around a cylindrical element which occupies much of the bore of the tube. The glass deposit is used for making glass fibres for telecommunications.", "Traditional text mining techniques transform free text into flat bags of words representation, which does not preserve sufficient semantics for the purpose of knowledge discovery. In this paper, we present a two-step procedure to mine generalized associations of semantic relations conveyed by the textual content of Web documents. First, RDF (resource description framework) metadata representing semantic relations are extracted from raw text using a myriad of natural language processing techniques. The relation extraction process also creates a term taxonomy in the form of a sense hierarchy inferred from WordNet. Then, a novel generalized association pattern mining algorithm (GP-Close) is applied to discover the underlying relation association patterns on RDF metadata. For pruning the large number of redundant overgeneralized patterns in relation pattern search space, the GP-Close algorithm adopts the notion of generalization closure for systematic overgeneralization reduction. The efficacy of our approach is demonstrated through empirical experiments conducted on an online database of terrorist activities", "We address the issue of extracting implicit and explicit relationships between entities in biomedical text. We argue that entities seldom occur in text in their simple form and that relationships in text relate the modified, complex forms of entities with each other. We present a rule-based method for (1) extraction of such complex entities and (2) relationships between them and (3) the conversion of such relationships into RDF. Furthermore, we present results that clearly demonstrate the utility of the generated RDF in discovering knowledge from text corpora by means of locating paths composed of the extracted relationships.", "", "The WordNet lexical database is now quite large and offers broad coverage of general lexical relations in English. As is evident in this volume, WordNet has been employed as a resource for many applications in natural language processing (NLP) and information retrieval (IR). However, many potentially useful lexical relations are currently missing from WordNet. Some of these relations, while useful for NLP and IR applications, are not necessarily appropriate for a general, domain-independent lexical database. 
For example, WordNet’s coverage of proper nouns is rather sparse, but proper nouns are often very important in application tasks. The standard way lexicographers find new relations is to look through huge lists of concordance lines. However, culling through long lists of concordance lines can be a rather daunting task (Church and Hanks, 1990), so a method that picks out those lines that are very likely to hold relations of interest should be an improvement over more traditional techniques. This chapter describes a method for the automatic discovery of WordNetstyle lexico-semantic relations by searching for corresponding lexico-syntactic patterns in large text collections. Large text corpora are now widely available, and can be viewed as vast resources from which to mine lexical, syntactic, and semantic information. This idea is reminiscent of what is known as “data mining” in the artificial intelligence literature (Fayyad and Uthurusamy, 1996), however, in this case the ore is raw text rather than tables of numerical data. The Lexico-Syntactic Pattern Extraction (LSPE) method is meant to be useful as an automated or semi-automated aid for lexicographers and builders of domain-dependent knowledge-bases. The LSPE technique is light-weight; it does not require a knowledge base or complex interpretation modules in order to suggest new WordNet relations." ] }
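The clustering step mentioned above (terms represented by their sentence contexts, compared by cosine similarity) can be illustrated generically. The sketch below is not the implementation of any cited system; the toy sentences, the term list, and the 0.3 similarity threshold are assumptions chosen only to show the technique.

```python
# Generic sketch of context-based term clustering for ontology learning:
# terms are represented by bag-of-words vectors of the sentences they occur in,
# and terms whose context vectors are cosine-similar are grouped together.
# This illustrates the general technique only, not the pipeline of any cited
# system; the toy sentences and the 0.3 threshold are assumptions.
from collections import Counter
import math

sentences = [
    "the seismic sensor reports a reading for each target",
    "the acoustic sensor reports a noisy reading for each target",
    "the ontology stores concepts and semantic relations between concepts",
    "the taxonomy stores concepts and is-a relations between concepts",
]
terms = ["sensor", "ontology", "taxonomy", "reading"]

def context_vector(term):
    """Bag of co-occurring words over all sentences containing the term."""
    ctx = Counter()
    for s in sentences:
        words = s.split()
        if term in words:
            ctx.update(w for w in words if w != term)
    return ctx

def cosine(u, v):
    dot = sum(u[w] * v[w] for w in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

vectors = {t: context_vector(t) for t in terms}
for i, a in enumerate(terms):
    for b in terms[i + 1:]:
        sim = cosine(vectors[a], vectors[b])
        marker = "  <- likely same cluster" if sim > 0.3 else ""
        print(f"sim({a:8s},{b:8s}) = {sim:.2f}{marker}")
```

On this toy corpus, "ontology" and "taxonomy" end up with highly similar context vectors, which is the kind of evidence such systems use to place terms near each other in the learned ontology.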
0812.4171
2950782526
We give a complexity dichotomy for the problem of computing the partition function of a weighted Boolean constraint satisfaction problem. Such a problem is parameterized by a set of rational-valued functions, which generalize constraints. Each function assigns a weight to every assignment to a set of Boolean variables. Our dichotomy extends previous work in which the weight functions were restricted to being non-negative. We represent a weight function as a product of the form (-1)^s g, where the polynomial s determines the sign of the weight and the non-negative function g determines its magnitude. We show that the problem of computing the partition function (the sum of the weights of all possible variable assignments) is in polynomial time if either every weight function can be defined by a "pure affine" magnitude with a quadratic sign polynomial or every function can be defined by a magnitude of "product type" with a linear sign polynomial. In all other cases, computing the partition function is FP^#P-complete.
The contribution of this paper (Theorem below) extends Theorem to constraint languages @math containing arbitrary rational-valued functions. This is an interesting extension since functions with negative values can cause cancellations and may make the partition function easier to compute. In a related context, recall the sharp distinction in complexity between computing the permanent and the determinant of a matrix. Independently, Cai, Lu and Xia have recently found a wider generalization, giving a dichotomy for the case where @math can be any set of complex-valued functions @cite_17 .
{ "cite_N": [ "@cite_17" ], "mid": [ "2097738137" ], "abstract": [ "This paper gives a dichotomy theorem for the complexity of computing the partition function of an instance of a weighted Boolean constraint satisfaction problem. The problem is parameterized by a finite set @math of nonnegative functions that may be used to assign weights to the configurations (feasible solutions) of a problem instance. Classical constraint satisfaction problems correspond to the special case of 0,1-valued functions. We show that computing the partition function, i.e., the sum of the weights of all configurations, is @math -complete unless either (1) every function in @math is of “product type,” or (2) every function in @math is “pure affine.” In the remaining cases, computing the partition function is in P." ] }
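To make the central object of the dichotomy concrete, the sketch below computes the partition function of a tiny weighted Boolean CSP by brute force: Z sums, over all 0/1 assignments, the product of the constraint weights. The particular weight functions (a soft "implies" constraint with a negative entry and a unary bias) are arbitrary illustrative choices, not examples from the paper.

```python
# Brute-force evaluation of the partition function of a weighted Boolean CSP:
#   Z = sum over all 0/1 assignments x of  prod_i f_i(x restricted to scope_i).
# The instance below is an arbitrary illustrative choice, not from the paper.
from itertools import product
from fractions import Fraction

def partition_function(num_vars, constraints):
    """constraints: list of (scope, table) where table maps a tuple of 0/1
    values (one per scope variable) to a rational weight."""
    z = Fraction(0)
    for x in product((0, 1), repeat=num_vars):
        w = Fraction(1)
        for scope, table in constraints:
            w *= table[tuple(x[v] for v in scope)]
        z += w
    return z

# A binary "implies"-like soft constraint with a negative weight on (1, 0),
# and a unary bias on variable 0.
implies_soft = {(0, 0): Fraction(1), (0, 1): Fraction(1),
                (1, 0): Fraction(-1, 2), (1, 1): Fraction(1)}
bias = {(0,): Fraction(1), (1,): Fraction(3)}

constraints = [((0, 1), implies_soft), ((0,), bias)]
print(partition_function(2, constraints))   # exact rational value of Z (7/2 here)
```

The negative entry in `implies_soft` is exactly the kind of cancellation-inducing weight that makes the mixed-sign case potentially easier, and is what the (-1)^s g representation in the abstract separates out.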
0812.4171
2950782526
We give a complexity dichotomy for the problem of computing the partition function of a weighted Boolean constraint satisfaction problem. Such a problem is parameterized by a set of rational-valued functions, which generalize constraints. Each function assigns a weight to every assignment to a set of Boolean variables. Our dichotomy extends previous work in which the weight functions were restricted to being non-negative. We represent a weight function as a product of the form (-1)^s g, where the polynomial s determines the sign of the weight and the non-negative function g determines its magnitude. We show that the problem of computing the partition function (the sum of the weights of all possible variable assignments) is in polynomial time if either every weight function can be defined by a "pure affine" magnitude with a quadratic sign polynomial or every function can be defined by a magnitude of "product type" with a linear sign polynomial. In all other cases, computing the partition function is FP^#P-complete.
The case of mixed signs has been considered previously by Goldberg, Grohe, Jerrum and Thurley @cite_12 , in the case of one symmetric binary function on an arbitrary finite domain. Their theorem generalizes that of Bulatov and Grohe @cite_21 for the non-negative case. @cite_12 give two examples, which can also be expressed as Boolean weighted @math , and fall within the scope of this paper. The first appeared as an open problem in @cite_21 . The complexity of these problems can be deduced from @cite_12 and from the results of this paper.
{ "cite_N": [ "@cite_21", "@cite_12" ], "mid": [ "2118223706", "2092436554" ], "abstract": [ "We give a complexity theoretic classification of the counting versions of so-called H-colouring problems for graphs H that may have multiple edges between the same pair of vertices. More generally, we study the problem of computing a weighted sum of homomorphisms to a weighted graph H.The problem has two interesting alternative formulations: first, it is equivalent to computing the partition function of a spin system as studied in statistical physics. And second, it is equivalent to counting the solutions to a constraint satisfaction problem whose constraint language consists of two equivalence relations.In a nutshell, our result says that the problem is in polynomial time if the adjacency matrix of H has row rank 1, and #P-hard otherwise.", "Partition functions, also known as homomorphism functions, form a rich family of graph invariants that contain combinatorial invariants such as the number of @math -colorings or the number of independent sets of a graph and also the partition functions of certain “spin glass” models of statistical physics such as the Ising model. Building on earlier work by Dyer and Greenhill [Random Structures Algorithms, 17 (2000), pp. 260-289] and Bulatov and Grohe [Theoret. Comput. Sci., 348 (2005), pp. 148-186], we completely classify the computational complexity of partition functions. Our main result is a dichotomy theorem stating that every partition function is either computable in polynomial time or #P-complete. Partition functions are described by symmetric matrices with real entries, and we prove that it is decidable in polynomial time in terms of the matrix whether a given partition function is in polynomial time or #P-complete. While in general it is very complicated to give an explicit algebraic or combinatorial description of the tractable cases, for partition functions described by Hadamard matrices (these turn out to be central in our proofs) we obtain a simple algebraic tractability criterion, which says that the tractable cases are those “representable” by a quadratic polynomial over the field @math ." ] }
0812.2049
2950274370
We address the problem of finding a "best" deterministic query answer to a query over a probabilistic database. For this purpose, we propose the notion of a consensus world (or a consensus answer) which is a deterministic world (answer) that minimizes the expected distance to the possible worlds (answers). This problem can be seen as a generalization of the well-studied inconsistent information aggregation problems (e.g. rank aggregation) to probabilistic databases. We consider this problem for various types of queries including SPJ queries, queries, group-by aggregate queries, and clustering. For different distance metrics, we obtain polynomial time optimal or approximation algorithms for computing the consensus answers (or prove NP-hardness). Most of our results are for a general probabilistic database model, called and xor tree model , which significantly generalizes previous probabilistic database models like x-tuples and block-independent disjoint models, and is of independent interest.
There has been much work on managing probabilistic, uncertain, incomplete, and/or fuzzy data in database systems, and this area has received renewed attention in the last few years (see e.g. @cite_32 @cite_18 @cite_6 @cite_15 @cite_34 @cite_10 @cite_17 @cite_40 @cite_19 @cite_26 ). This work has spanned a range of issues, from theoretical development of data models and data languages to practical implementation issues such as indexing techniques. In terms of representation power, most of this work has either assumed independence between the tuples @cite_34 @cite_40 , or has restricted the correlations that can be modeled @cite_18 @cite_6 @cite_12 @cite_21 . Several approaches for modeling complex correlations in probabilistic databases have also been proposed @cite_42 @cite_9 @cite_4 @cite_7 .
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_4", "@cite_7", "@cite_9", "@cite_21", "@cite_42", "@cite_32", "@cite_6", "@cite_19", "@cite_40", "@cite_15", "@cite_34", "@cite_10", "@cite_12", "@cite_17" ], "mid": [ "2125791539", "", "", "2114157818", "2164688391", "2129035130", "2120825705", "1990391007", "2024400846", "1486776102", "", "", "1992609556", "92013997", "178066904", "" ], "abstract": [ "It is often desirable to represent in a database, entities whose properties cannot be deterministically classified. The authors develop a data model that includes probabilities associated with the values of the attributes. The notion of missing probabilities is introduced for partially specified probability distributions. This model offers a richer descriptive language allowing the database to more accurately reflect the uncertain real world. Probabilistic analogs to the basic relational operators are defined and their correctness is studied. A set of operators that have no counterpart in conventional relational systems is presented. >", "", "", "Several real-world applications need to effectively manage and reason about large amounts of data that are inherently uncertain. For instance, pervasive computing applications must constantly reason about volumes of noisy sensory readings for a variety of reasons, including motion prediction and human behavior modeling. Such probabilistic data analyses require sophisticated machine-learning tools that can effectively model the complex spatio temporal correlation patterns present in uncertain sensory data. Unfortunately, to date, most existing approaches to probabilistic database systems have relied on somewhat simplistic models of uncertainty that can be easily mapped onto existing relational architectures: Probabilistic information is typically associated with individual data tuples, with only limited or no support for effectively capturing and reasoning about complex data correlations. In this paper, we introduce BayesStore, a novel probabilistic data management architecture built on the principle of handling statistical models and probabilistic inference tools as first-class citizens of the database system. Adopting a machine-learning view, BAYESSTORE employs concise statistical relational models to effectively encode the correlation patterns between uncertain data, and promotes probabilistic inference and statistical model manipulation as part of the standard DBMS operator repertoire to support efficient and sound query processing. We present BAYESSTORE's uncertainty model based on a novel, first-order statistical model, and we redefine traditional query processing operators, to manipulate the data and the probabilistic models of the database in an efficient manner. Finally, we validate our approach, by demonstrating the value of exploiting data correlations during query processing, and by evaluating a number of optimizations which significantly accelerate query processing.", "Incomplete information arises naturally in numerous data management applications. Recently, several researchers have studied query processing in the context of incomplete information. Most work has combined the syntax of a traditional query language like relational algebra with a nonstandard semantics such as certain or ranked possible answers. There are now also languages with special features to deal with uncertainty. 
However, to the standards of the data management community, to date no language proposal has been made that can be considered a natural analog to SQL or relational algebra for the case of incomplete information. In this paper we propose such a language, World-set Algebra, which satisfies the robustness criteria and analogies to relational algebra that we expect. The language supports the contemplation on alternatives and can thus map from a complete database to an incomplete one comprising several possible worlds. We show that World-set Algebra is conservative over relational algebra in the sense that any query that maps from a complete database to a complete database (a complete-to-complete query) is equivalent to a relational algebra query. Moreover, we give an efficient algorithm for effecting this translation. We then study algebraic query optimization of such queries. We argue that query languages with explicit constructs for handling uncertainty allow for the more natural and simple expression of many real-world decision support queries. The results of this paper not only suggest a language for specifying queries in this way, but also allow for their efficient evaluation in any relational database management system.", "This paper explores an inherent tension in modeling and querying uncertain data: simple, intuitive representations of uncertain data capture many application requirements, but these representations are generally incomplete―standard operations over the data may result in unrepresentable types of uncertainty. Complete models are theoretically attractive, but they can be nonintuitive and more complex than necessary for many applications. To address this tension, we propose a two-layer approach to managing uncertain data: an underlying logical model that is complete, and one or more working models that are easier to understand, visualize, and query, but may lose some information. We explore the space of incomplete working models, place several of them in a strict hierarchy based on expressive power, and study their closure properties. We describe how the two-layer approach is being used in our prototype DBMS for uncertain data, and we identify a number of interesting open problems to fully realize the approach.", "Probabilistic databases have received considerable attention recently due to the need for storing uncertain data produced by many real world applications. The widespread use of probabilistic databases is hampered by two limitations: (1) current probabilistic databases make simplistic assumptions about the data (e.g., complete independence among tuples) that make it difficult to use them in applications that naturally produce correlated data, and (2) most probabilistic databases can only answer a restricted subset of the queries that can be expressed using traditional query languages. We address both these limitations by proposing a framework that can represent not only probabilistic tuples, but also correlations that may be present among them. Our proposed framework naturally lends itself to the possible world semantics thus preserving the precise query semantics extant in current probabilistic databases. We develop an efficient strategy for query evaluation over such probabilistic databases by casting the query processing problem as an inference problem in an appropriately constructed probabilistic graphical model. We present several optimizations specific to probabilistic databases that enable efficient query evaluation. 
We validate our approach by presenting an experimental evaluation that illustrates the effectiveness of our techniques at answering various queries using real and synthetic datasets.", "ABSTRACT This paper concerns the semantics of Codd's relational model of data. Formulated are precise conditions that should be satisfied in a semantically meaningful extension of the usual relational operators, such as projection, selection, union, and join, from operators on relations to operators on tables with “null values” of various kinds allowed. These conditions require that the system be safe in the sense that no incorrect conclusion is derivable by using a specified subset Ω of the relational operators; and that it be complete in the sense that all valid conclusions expressible by relational expressions using operators in Ω are in fact derivable in this system. Two such systems of practical interest are shown. The first, based on the usual Codd's null values, supports projection and selection. The second, based on many different (“marked”) null values or variables allowed to appear in a table, is shown to correctly support projection, positive selection (with no negation occurring in the selection condition), union, and renaming of attributes, which allows for processing arbitrary conjunctive queries. A very desirable property enjoyed by this system is that all relational operators on tables are performed in exactly the same way as in the case of the usual relations. A third system, mainly of theoretical interest, supporting projection, selection, union, join, and renaming, is also discussed. Under a so-called closed world assumption, it can also handle the operator of difference. It is based on a device called a conditional table and is crucial to the proof of the correctness of the second system. All systems considered allow for relational expressions containing arbitrarily many different relation symbols, and no form of the universal relation assumption is required. Categories and Subject Descriptors: H.2.3 [Database Management]: Languages— query languages; H.2.4 [Database Management]: Systems— query processing General Terms: Theory", "Probability theory is mathematically the best understood paradigm for modeling and manipulating uncertain information. Probabilities of complex events can be computed from those of basic events on which they depend, using any of a number of strategies. Which strategy is appropriate depends very much on the known interdependencies among the events involved. Previous work on probabilistic databases has assumed a fixed and restrictive combination strategy (e.g., assuming all events are pairwise independent). In this article, we characterize, using postulates, whole classes of strategies for conjunction, disjunction, and negation, meaningful from the viewpoint of probability theory. (1) We propose a probabilistic relational data model and a generic probabilistic relational algebra that neatly captures various strategies satisfying the postulates, within a single unified framework. (2) We show that as long as the chosen strategies can be computed in polynomial time, queries in the positive fragment of the probabilistic relational algebra have essentially the same data complexity as classical relational algebra. (3) We establish various containments and equivalences between algebraic expressions, similar in spirit to those in classical algebra. (4) We develop algorithms for maintaining materialized probabilistic views. 
(5) Based on these ideas, we have developed a prototype probabilistic database system called ProbView on top of Dbase V.0. We validate our complexity results with experiments and show that rewriting certain types of queries to other equivalent forms often yields substantial savings.", "Trio is a new database system that manages not only data, but also the accuracy and lineage of the data. Approximate (uncertain, probabilistic, incomplete, fuzzy, and imprecise!) databases have been proposed in the past, and the lineage problem also has been studied. The goals of the Trio project are to distill previous work into a simple and usable model, design a query language as an understandable extension to SQL, and most importantly build a working system---a system that augments conventional data management with both accuracy and lineage as an integral part of the data. This paper provides numerous motivating applications for Trio and lays out preliminary plans for the data model, query language, and prototype system.", "", "", "We present a probabilistic relational algebra (PRA) which is a generalization of standard relational algebra. In PRA, tuples are assigned probabilistic weights giving the probability that a tuple belongs to a relation. Based on intensional semantics, the tuple weights of the result of a PRA expression always conform to the underlying probabilistic model. We also show for which expressions extensional semantics yields the same results. Furthermore, we discuss complexity issues and indicate possibilities for optimization. With regard to databases, the approach allows for representing imprecise attribute values, whereas for information retrieval, probabilistic document indexing and probabilistic search term weighting can be modeled. We introduce the concept of vague predicates which yield probabilistic weights instead of Boolean values, thus allowing for queries with vague selection conditions. With these features, PRA implements uncertainty and vagueness in combination with the relational model.", "", "", "" ] }
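The consensus notion from the abstract can be illustrated with a minimal brute-force search: given a distribution over possible answers and a distance function, pick the deterministic answer minimizing the expected distance. The toy possible worlds, their probabilities, and the choice of symmetric difference as the distance are assumptions for illustration; the paper treats far more general query types and models.

```python
# Minimal illustration of a consensus answer: the deterministic answer that
# minimizes the expected distance to the possible answers. The toy possible
# worlds, probabilities, and the symmetric-difference distance are assumptions
# chosen for illustration; the paper treats far more general settings.
from itertools import chain, combinations

# possible answers (as sets of tuple ids) with their probabilities
worlds = [({'t1', 't2'}, 0.5), ({'t1'}, 0.3), ({'t2', 't3'}, 0.2)]

def expected_distance(candidate):
    return sum(p * len(candidate ^ w) for w, p in worlds)

# candidates: every subset of the tuples appearing in some world
universe = sorted(set().union(*(w for w, _ in worlds)))
candidates = chain.from_iterable(combinations(universe, r) for r in range(len(universe) + 1))

best = min((set(c) for c in candidates), key=expected_distance)
print('consensus answer:', best, 'expected distance:', expected_distance(best))
```

For symmetric difference the search simply confirms the intuitive rule that a tuple belongs to the consensus answer exactly when its marginal probability exceeds 1/2 (here {t1, t2}); the interest of the general problem lies in distances and query types where no such simple characterization is available.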
0812.2049
2950274370
We address the problem of finding a "best" deterministic query answer to a query over a probabilistic database. For this purpose, we propose the notion of a consensus world (or a consensus answer) which is a deterministic world (answer) that minimizes the expected distance to the possible worlds (answers). This problem can be seen as a generalization of the well-studied inconsistent information aggregation problems (e.g. rank aggregation) to probabilistic databases. We consider this problem for various types of queries including SPJ queries, queries, group-by aggregate queries, and clustering. For different distance metrics, we obtain polynomial time optimal or approximation algorithms for computing the consensus answers (or prove NP-hardness). Most of our results are for a general probabilistic database model, called and xor tree model , which significantly generalizes previous probabilistic database models like x-tuples and block-independent disjoint models, and is of independent interest.
In recent years, there has also been much work on efficiently answering different types of queries over probabilistic databases. @cite_29 first considered the problem of ranking over probabilistic databases, and proposed two ranking functions to combine the tuple scores and probabilities. @cite_35 presented improved algorithms for the same ranking functions. Zhang and Chomicki @cite_36 presented desiderata for ranking functions and proposed Global queries. Ming @cite_5 @cite_3 recently presented a different ranking function called Probabilistic threshold queries . Finally, @cite_37 also presented a semantics for ranking functions and a new ranking function called expected rank . In a recent work, we proposed a parameterized ranking function and presented general algorithms for evaluating it @cite_31 . Other types of queries have also been considered recently over probabilistic databases (e.g., clustering @cite_16 , nearest neighbors @cite_41 , etc.).
{ "cite_N": [ "@cite_35", "@cite_37", "@cite_36", "@cite_41", "@cite_29", "@cite_3", "@cite_5", "@cite_31", "@cite_16" ], "mid": [ "", "2140237757", "2171560571", "2133246278", "2138271690", "2041763948", "", "", "2110553125" ], "abstract": [ "", "When dealing with massive quantities of data, top-k queries are a powerful technique for returning only the k most relevant tuples for inspection, based on a scoring function. The problem of efficiently answering such ranking queries has been studied and analyzed extensively within traditional database settings. The importance of the top-k is perhaps even greater in probabilistic databases, where a relation can encode exponentially many possible worlds. There have been several recent attempts to propose definitions and algorithms for ranking queries over probabilistic data. However, these all lack many of the intuitive properties of a top-k over deterministic data. Specifically, we define a number of fundamental properties, including exact-k, containment, unique-rank, value-invariance, and stability, which are all satisfied by ranking queries on certain data. We argue that all these conditions should also be fulfilled by any reasonable definition for ranking uncertain data. Unfortunately, none of the existing definitions is able to achieve this. To remedy this shortcoming, this work proposes an intuitive new approach of expected rank. This uses the well-founded notion of the expected rank of each tuple across all possible worlds as the basis of the ranking. We are able to prove that, in contrast to all existing approaches, the expected rank satisfies all the required properties for a ranking query. We provide efficient solutions to compute this ranking across the major models of uncertain data, such as attribute-level and tuple-level uncertainty. For an uncertain relation of N tuples, the processing cost is O(N logN)—no worse than simply sorting the relation. In settings where there is a high cost for generating each tuple in turn, we provide pruning techniques based on probabilistic tail bounds that can terminate the search early and guarantee that the top-k has been found. Finally, a comprehensive experimental study confirms the effectiveness of our approach.", "We formulate three intuitive semantic properties for top-k queries in probabilistic databases, and propose Global-Topk query semantics which satisfies all of them. We provide a dynamic programming algorithm to evaluate top-k queries under Global-Topk in simple probabilistic relations. For general probabilistic relations, we show a polynomial reduction to the simple case. Our analysis shows that the complexity of query evaluation is linear in k and at most quadratic in database size.", "Uncertainty pervades many domains in our lives. Current real-life applications, e.g., location tracking using GPS devices or cell phones, multimedia feature extraction, and sensor data management, deal with different kinds of uncertainty. Finding the nearest neighbor objects to a given query point is an important query type in these applications. In this paper, we study the problem of finding objects with the highest marginal probability of being the nearest neighbors to a query object. We adopt a general uncertainty model allowing for data and query uncertainty. Under this model, we define new query semantics, and provide several efficient evaluation algorithms. We analyze the cost factors involved in query evaluation, and present novel techniques to address the trade-offs among these factors. 
We give multiple extensions to our techniques including handling dependencies among data objects, and answering threshold queries. We conduct an extensive experimental study to evaluate our techniques on both real and synthetic data.", "Top-k processing in uncertain databases is semantically and computationally different from traditional top-k processing. The interplay between score and uncertainty makes traditional techniques inapplicable. We introduce new probabilistic formulations for top-k queries. Our formulations are based on \"marriage\" of traditional top-k semantics and possible worlds semantics. In the light of these formulations, we construct a framework that encapsulates a state space model and efficient query processing techniques to tackle the challenges of uncertain data settings. We prove that our techniques are optimal in terms of the number of accessed tuples and materialized search states. Our experiments show the efficiency of our techniques under different data distributions with orders of magnitude improvement over naive materialization of possible worlds.", "Uncertain data is inherent in a few important applications such as environmental surveillance and mobile object tracking. Top-k queries (also known as ranking queries) are often natural and useful in analyzing uncertain data in those applications. In this paper, we study the problem of answering probabilistic threshold top-k queries on uncertain data, which computes uncertain records taking a probability of at least p to be in the top-k list where p is a user specified probability threshold. We present an efficient exact algorithm, a fast sampling algorithm, and a Poisson approximation based algorithm. An empirical study using real and synthetic data sets verifies the effectiveness of probabilistic threshold top-k queries and the efficiency of our methods.", "", "", "There is an increasing quantity of data with uncertainty arising from applications such as sensor network measurements, record linkage, and as output of mining algorithms. This uncertainty is typically formalized as probability density functions over tuple values. Beyond storing and processing such data in a DBMS, it is necessary to perform other data analysis tasks such as data mining. We study the core mining problem of clustering on uncertain data, and define appropriate natural generalizations of standard clustering optimization criteria. Two variations arise, depending on whether a point is automatically associated with its optimal center, or whether it must be assigned to a fixed cluster no matter where it is actually located. For uncertain versions of k-means and k-median, we show reductions to their corresponding weighted versions on data with no uncertainties. These are simple in the unassigned case, but require some care for the assigned version. Our most interesting results are for uncertain k-center, which generalizes both traditional k-center and k-median objectives. We show a variety of bicriteria approximation algorithms. One picks O(ke--1log2n) centers and achieves a (1 + e) approximation to the best uncertain k-centers. Another picks 2k centers and achieves a constant factor approximation. Collectively, these results are the first known guaranteed approximation algorithms for the problems of clustering uncertain data." ] }
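The "expected rank" mentioned above can be made concrete with a tiny independent-tuple example, computed by enumerating all possible worlds. The tuples, scores, and existence probabilities below are assumptions, and the convention used for tuples absent from a world (rank equal to the size of that world) is one simple choice that simplifies the definitions in @cite_37.

```python
# Toy computation of "expected rank" over the possible worlds of an
# independent-tuple probabilistic table. Tuples, scores, and probabilities are
# illustrative assumptions; the convention for tuples absent from a world
# (rank = size of that world) is one simple choice and simplifies the cited
# definitions.
from itertools import product

tuples = {'a': (100, 0.9), 'b': (90, 0.6), 'c': (80, 0.8)}   # name: (score, prob)

def expected_ranks():
    exp = {t: 0.0 for t in tuples}
    names = list(tuples)
    for present in product((True, False), repeat=len(names)):
        world = [t for t, p in zip(names, present) if p]
        prob = 1.0
        for t, p in zip(names, present):
            prob *= tuples[t][1] if p else 1.0 - tuples[t][1]
        for t in names:
            if t in world:
                rank = sum(1 for u in world if tuples[u][0] > tuples[t][0])
            else:
                rank = len(world)          # absent tuples ranked last (assumed convention)
            exp[t] += prob * rank
    return exp

ranks = expected_ranks()
print(sorted(ranks.items(), key=lambda kv: kv[1]))   # smaller expected rank = better
```

A top-k answer under this semantics is simply the k tuples with the smallest expected rank, which is what makes the definition easy to compute and to compare against the other ranking functions surveyed above.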
0812.0790
2952665523
The paper introduces the notion of off-line justification for Answer Set Programming (ASP). Justifications provide a graph-based explanation of the truth value of an atom w.r.t. a given answer set. The paper extends also this notion to provide justification of atoms during the computation of an answer set (on-line justification), and presents an integration of on-line justifications within the computation model of Smodels. Off-line and on-line justifications provide useful tools to enhance understanding of ASP, and they offer a basic data structure to support methodologies and tools for debugging answer set programs. A preliminary implementation has been developed in ASP-PROLOG. (To appear in Theory and Practice of Logic Programming (TPLP))
[1.] Program instrumentation and execution: assertion-based debugging (e.g., @cite_21 ) and algorithmic debugging @cite_25 are examples of approaches focused on this first phase.
{ "cite_N": [ "@cite_21", "@cite_25" ], "mid": [ "1588322402", "2068981034" ], "abstract": [ "As constraint logic programming matures and larger applications are built, an increased need arises for advanced development and debugging environments. Assertions are linguistic constructions which allow expressing properties of programs. Classical examples of assertions are type declarations. However, herein we are interested in supporting a more general setting [3, 1] in which, on one hand assertions can be of a more general nature, including properties which are statically undecidable, and, on the other, only a small number of assertions may be present in the program, i.e., the assertions are optional. In particular, we do not wish to limit the programming language or the language of assertions unnecessarily in order to make the assertions statically decidable. Consequently, the proposed framework needs to deal throughout with approximations [2].", "The notion of program correctness with respect to an interpretation is defined for a class of programming languages. Under this definition, if a program terminates with an incorrect output then it contains an incorrect procedure. Algorithms for detecting incorrect procedures are developed. These algorithms formalize what experienced programmers may know already.A logic program implementation of these algorithms is described. Its performance suggests that the algorithms can be the backbone of debugging aids that go far beyond what is offered by current programming environments.Applications of algorithmic debugging to automatic program construction are explored." ] }
0812.0790
2952665523
The paper introduces the notion of off-line justification for Answer Set Programming (ASP). Justifications provide a graph-based explanation of the truth value of an atom w.r.t. a given answer set. The paper extends also this notion to provide justification of atoms during the computation of an answer set (on-line justification), and presents an integration of on-line justifications within the computation model of Smodels. Off-line and on-line justifications provide useful tools to enhance understanding of ASP, and they offer a basic data structure to support methodologies and tools for debugging answer set programs. A preliminary implementation has been developed in ASP-PROLOG. (To appear in Theory and Practice of Logic Programming (TPLP))
[2.] Data Collection: focuses on collecting, from the execution, the data necessary to understand it, as in event-based debugging @cite_55 , tracing, and explanation-based debugging @cite_39 @cite_50 .
{ "cite_N": [ "@cite_55", "@cite_50", "@cite_39" ], "mid": [ "2105844064", "1539229560", "2050349524" ], "abstract": [ "This paper suggests an approach to the development of software testing and debugging automation tools based on precise program behavior models. The program behavior model is defined as a set of events (event trace) with two basic binary relations over events -- precedence and inclusion, and represents the temporal relationship between actions. A language for the computations over event traces is developed that provides a basis for assertion checking, debugging queries, execution profiles, and performance measurements. The approach is nondestructive, since assertion texts are separated from the target program source code and can be maintained independently. Assertions can capture the dynamic properties of a particular target program and can formalize the general knowledge of typical bugs and debugging strategies. An event grammar provides a sound basis for assertion language implementation via target program automatic instrumentation. An implementation architecture and preliminary experiments with a prototype assertion checker for the C programming language are discussed.", "", "Abstract Traces of program executions are a helpful source of information for program debugging. They, however, give a picture of program executions at such a low level that users often have difficulties to interpret the information. Opium, our extendable trace analyzer, is connected to a “standard” Prolog tracer. Opium is programmable and extendable. It provides a trace query language and abstract views of executions. Users can therefore examine program executions at the levels of abstraction which suit them. Opium has shown its capabilities to build abstract tracers and automated debugging facilities. This article describes in depth the trace query mechanism, from the model to its implementation. Characteristic examples are detailed. Extensions written so far on top of the trace query mechanism are listed. Two recent extensions are presented: the abstract tracers for the LO (Linear Objects) and the CHR (Constraint Handling Rules) languages. These two extensions were specified and implemented within a few days. They show how to use Opium for real applications." ] }
0812.0790
2952665523
The paper introduces the notion of off-line justification for Answer Set Programming (ASP). Justifications provide a graph-based explanation of the truth value of an atom w.r.t. a given answer set. The paper extends also this notion to provide justification of atoms during the computation of an answer set (on-line justification), and presents an integration of on-line justifications within the computation model of Smodels. Off-line and on-line justifications provide useful tools to enhance understanding of ASP, and they offer a basic data structure to support methodologies and tools for debugging answer set programs. A preliminary implementation has been developed in ASP-PROLOG. (To appear in Theory and Practice of Logic Programming (TPLP))
The use of graphs proposed in this paper is complementary to the view proposed by other authors, who use graph structures as a means to describe answer set programs, to make structural properties explicit, and to support the development of the program execution. In @cite_45 @cite_27 , rule dependency graphs (a.k.a. ) of answer set programs are employed to model the computation of answer sets as special forms of graph coloring. A comprehensive survey of alternative graph representations of answer set programs, and their properties with respect to the problem of answer set characterization, has been presented in @cite_7 @cite_10 . In particular, the authors provide characterizations of desirable graph representations, relating the existence of answer sets to the presence of cycles and the use of coloring to characterize properties of programs (e.g., consistency). We conjecture that the outcome of a successful coloring of an EDG @cite_7 representing one answer set can be projected, modulo non-obvious transformations, to an off-line graph, and vice versa. On the other hand, the notion of on-line justification does not seem to have a direct relation to the graph representations presented in the cited works.
{ "cite_N": [ "@cite_27", "@cite_45", "@cite_10", "@cite_7" ], "mid": [ "2166174694", "1533970988", "2070951159", "95993446" ], "abstract": [ "We investigate the usage of rule dependency graphs and their colorings for characterizing and computing answer sets of logic programs. This approach provides us with insights into the interplay between rules when inducing answer sets. We start with different characterizations of answer sets in terms of totally colored dependency graphs that differ in graph-theoretical aspects. We then develop a series of operational characterizations of answer sets in terms of operators on partial colorings. In analogy to the notion of a derivation in proof theory, our operational characterizations are expressed as (non-deterministically formed) sequences of colorings, turning an uncolored graph into a totally colored one. In this way, we obtain an operational framework in which different combinations of operators result in different formal properties. Among others, we identify the basic strategy employed by the noMoRe system and justify its algorithmic approach. Furthermore, we distinguish operations corresponding to Fitting's operator as well as to well-founded semantics.", "We present a new answer set solver, called nomore++, along with its underlying theoretical foundations. A distinguishing feature is that it treats heads and bodies equitably as computational objects. Apart from its operational foundations, we show how it improves on previous work through its new lookahead and its computational strategy of maintaining unfounded-freeness. We underpin our claims by selected experimental results.", "Logic programs under Answer Sets semantics can be studied, and actual computation can be carried out, by means of representing them by directed graphs. Several reductions of logic programs to directed graphs are now available. We compare our proposed representation, called Extended Dependency Graph, to the Block Graph representation recently defined by Linke [Proc. IJCAI-2001, 2001, pp. 641-648]. On the relevant fragment of well-founded irreducible programs, extended dependency and block graph turns out to be isomorphic. So, we argue that graph representation of general logic programs should be abandoned in favor of graph representation of well-founded irreducible programs, which are more concise, more uniform in structure while being equally expressive.", "characterized in terms of properties of Rule Graphs. We show that, unfortunately, also the RG is ambiguous with respect to the answer set semantics, while the EDG is isomorphic to the program it represents. We argue that the reason of this drawback of the RG as a software engineering tool relies in the absence of a distinction between the different kinds of connections between cycles. Finally, we suggest that properties of a program might be characterized(andchecked)intermsofadmissiblecolorings of the EDG." ] }
0812.0147
2950083903
In this paper we analyze the performance of Warning Propagation, a popular message passing algorithm. We show that for 3CNF formulas drawn from a certain distribution over random satisfiable 3CNF formulas, commonly referred to as the planted-assignment distribution, running Warning Propagation in the standard way (run message passing until convergence, simplify the formula according to the resulting assignment, and satisfy the remaining subformula, if necessary, using a simple "off the shelf" heuristic) results in a satisfying assignment when the clause-variable ratio is a sufficiently large constant.
As for relevant results in random graph theory, the seminal work of Alon and Kahale @cite_28 paved the way towards dealing with large-constant-degree planted distributions. @cite_28 present an algorithm that @math -colors planted @math -colorable graphs (the distribution of graphs generated by partitioning the @math vertices into @math equally-sized color classes, and including every edge connecting two different color classes with probability @math ; commonly denoted @math ) with a sufficiently large constant expected degree. Building upon the techniques introduced in @cite_28 , Chen and Frieze @cite_14 present an algorithm that 2-colors planted 3-uniform bipartite hypergraphs of large constant degree, and Flaxman @cite_7 presents an algorithm for satisfying planted 3SAT instances with a large constant clause-variable ratio.
{ "cite_N": [ "@cite_28", "@cite_14", "@cite_7" ], "mid": [ "2079035346", "1898334569", "1994098212" ], "abstract": [ "Let G3n,p,3 be a random 3-colorable graph on a set of 3n vertices generated as follows. First, split the vertices arbitrarily into three equal color classes, and then choose every pair of vertices of distinct color classes, randomly and independently, to be edges with probability p. We describe a polynomial-time algorithm that finds a proper 3-coloring of G3n,p,3 with high probability, whenever p @math c n, where c is a sufficiently large absolute constant. This settles a problem of Blum and Spencer, who asked if an algorithm can be designed that works almost surely for p @math polylog(n) n [J. Algorithms, 19 (1995), pp. 204--234]. The algorithm can be extended to produce optimal k-colorings of random k-colorable graphs in a similar model as well as in various related models. Implementation results show that the algorithm performs very well in practice even for moderate values of c.", "It is NP-Hard to find a proper 2-coloring of a given 2-colorable (bipartite) hypergraph H. We consider algorithms that will color such a hypergraph using few colors in polynomial time. The results of the paper can be summarized as follows: Let n denote the number of vertices of H and m the number of edges, (i) For bipartite hypergraphs of dimension k there is a polynomial time algorithm which produces a proper coloring using min ( O(n^ 1 - 1 k ),O((m n)^ 1 k - 1 ) )colors, (ii) For 3-uniform bipartite hypergraphs, the bound is reduced to O(n2 9). (iii) For a class of dense 3-uniform bipartite hypergraphs, we have a randomized algorithm which can color optimally. (iv) For a model of random bipartite hypergraphs with edge probability p≥ dn−2, d > O a sufficiently large constant, we can almost surely find a proper 2-coloring.", "Let I be a random 3CNF formula generated by choosing a truth assignment φ for variables x 1 , ..., x n uniformly at random and including every clause with i literals set true by φ with probability p i , independently. We show that for any 0 ≤ η 2 , η 3 ≤ 1 there is a constant d min so that for all d ≥ d min , a spectral algorithm similar to the graph coloring algorithm of [1] will find a satisfying assignment with high probability for p 1 = d n2, p 2 = η 2 d n2, and p3 = η 3 d n2. Appropriately setting η 2 and η 3 yields natural distributions on satisfiable 3CNFs, not-all-equal-sat 3CNFs, and exactly-one-sat 3CNFs." ] }
0812.0147
2950083903
In this paper we analyze the performance of Warning Propagation, a popular message passing algorithm. We show that for 3CNF formulas drawn from a certain distribution over random satisfiable 3CNF formulas, commonly referred to as the planted-assignment distribution, running Warning Propagation in the standard way (run message passing until convergence, simplify the formula according to the resulting assignment, and satisfy the remaining subformula, if necessary, using a simple "off the shelf" heuristic) results in a satisfying assignment when the clause-variable ratio is a sufficiently large constant.
Another difference between our work and that of @cite_28 @cite_14 @cite_7 is that, unlike the algorithms analyzed in those papers, WP is a randomized algorithm, a fact which makes its analysis more difficult. We could have simplified our analysis by making WP deterministic (for example, by initializing all clause-variable messages to 1 in step 2 of the algorithm), but there are good reasons why WP is randomized. For example, it can be shown that the randomized version of WP converges with probability 1 on 2CNF formulas that form one cycle of implications, but it might not converge if step 4 does not introduce fresh randomness in every iteration of the algorithm (details omitted).
{ "cite_N": [ "@cite_28", "@cite_14", "@cite_7" ], "mid": [ "2079035346", "1898334569", "1994098212" ], "abstract": [ "Let G3n,p,3 be a random 3-colorable graph on a set of 3n vertices generated as follows. First, split the vertices arbitrarily into three equal color classes, and then choose every pair of vertices of distinct color classes, randomly and independently, to be edges with probability p. We describe a polynomial-time algorithm that finds a proper 3-coloring of G3n,p,3 with high probability, whenever p @math c n, where c is a sufficiently large absolute constant. This settles a problem of Blum and Spencer, who asked if an algorithm can be designed that works almost surely for p @math polylog(n) n [J. Algorithms, 19 (1995), pp. 204--234]. The algorithm can be extended to produce optimal k-colorings of random k-colorable graphs in a similar model as well as in various related models. Implementation results show that the algorithm performs very well in practice even for moderate values of c.", "It is NP-Hard to find a proper 2-coloring of a given 2-colorable (bipartite) hypergraph H. We consider algorithms that will color such a hypergraph using few colors in polynomial time. The results of the paper can be summarized as follows: Let n denote the number of vertices of H and m the number of edges, (i) For bipartite hypergraphs of dimension k there is a polynomial time algorithm which produces a proper coloring using min ( O(n^ 1 - 1 k ),O((m n)^ 1 k - 1 ) )colors, (ii) For 3-uniform bipartite hypergraphs, the bound is reduced to O(n2 9). (iii) For a class of dense 3-uniform bipartite hypergraphs, we have a randomized algorithm which can color optimally. (iv) For a model of random bipartite hypergraphs with edge probability p≥ dn−2, d > O a sufficiently large constant, we can almost surely find a proper 2-coloring.", "Let I be a random 3CNF formula generated by choosing a truth assignment φ for variables x 1 , ..., x n uniformly at random and including every clause with i literals set true by φ with probability p i , independently. We show that for any 0 ≤ η 2 , η 3 ≤ 1 there is a constant d min so that for all d ≥ d min , a spectral algorithm similar to the graph coloring algorithm of [1] will find a satisfying assignment with high probability for p 1 = d n2, p 2 = η 2 d n2, and p3 = η 3 d n2. Appropriately setting η 2 and η 3 yields natural distributions on satisfiable 3CNFs, not-all-equal-sat 3CNFs, and exactly-one-sat 3CNFs." ] }
0812.0423
2951084471
We consider minimization of functions that are compositions of convex or prox-regular functions (possibly extended-valued) with smooth vector functions. A wide variety of important optimization problems fall into this framework. We describe an algorithmic framework based on a subproblem constructed from a linearized approximation to the objective and a regularization term. Properties of local solutions of this subproblem underlie both a global convergence result and an identification property of the active manifold containing the solution of the original problem. Preliminary computational results on both convex and nonconvex examples are promising.
@cite_7 solve a subproblem of this form for the case of nonlinear programming, where @math is the sum of the objective function @math and the indicator function of the equalities and inequalities defining the feasible region. The resulting step can be enhanced by solving an equality-constrained quadratic program (EQP).
{ "cite_N": [ "@cite_7" ], "mid": [ "2162151148" ], "abstract": [ "We introduce a new class of multifunctions whose graphs under certain \"kernel inverting\" matrices, are locally equal to the graphs of Lipschitzian (single-valued) mappings. We characterize the existence of Lipschitzian localizations of these multifunctions in terms of a natural condition on a generalized Jacobian mapping. One corollary to our main result is a Lipschitzian inverse mapping theorem for the broad class of \"max hypomonotone\" multifunctions. We apply our theoretical results to the sensitivity analysis of solution mappings associated with parameterized optimization problems. In particular, we obtain new characterizations of the Lipschitzian stability of stationary points and Karush-Kuhn-Tucker pairs associated with parameterized nonlinear programs." ] }
0812.0423
2951084471
We consider minimization of functions that are compositions of convex or prox-regular functions (possibly extended-valued) with smooth vector functions. A wide variety of important optimization problems fall into this framework. We describe an algorithmic framework based on a subproblem constructed from a linearized approximation to the objective and a regularization term. Properties of local solutions of this subproblem underlie both a global convergence result and an identification property of the active manifold containing the solution of the original problem. Preliminary computational results on both convex and nonconvex examples are promising.
Mifflin and Sagastizábal @cite_17 describe an algorithm in which an approximate solution of the proximal subproblem is obtained, again for the case of a convex objective, by making use of a piecewise-linear underapproximation to the objective @math . The approach is most suitable for a bundle method in which the piecewise-linear approximation is constructed from subgradients gathered at previous iterations. Approximations to the manifold of smoothness for @math are constructed from the solution of this approximate proximal point calculation, and a Newton-like step for the Lagrangian is taken along this manifold, as envisioned in earlier methods. Daniilidis, Hare, and Malick @cite_34 use the terminology "predictor-corrector" to describe algorithms of this type. Their "predictor" step is the step along the manifold of smoothness for @math , while the "corrector" step eventually returns the iterates to the correct active manifold (see @cite_34 , Theorem 28). Miller and Malick @cite_4 show how algorithms of this type are related to Newton-like methods proposed earlier in various contexts.
{ "cite_N": [ "@cite_34", "@cite_4", "@cite_17" ], "mid": [ "1601741115", "2139313686", "2094993225" ], "abstract": [ "Basic notation.- Introduction.- Background material.- Optimality conditions.- Basic perturbation theory.- Second order analysis of the optimal value and optimal solutions.- Optimal Control.- References.", "This paper studies Newton-type methods for minimization of partly smooth convex functions. Sequential Newton methods are provided using local parameterizations obtained from **-Lagrangian theory and from Riemannian geometry. The Hessian based on the **-Lagrangian depends on the selection of a dual parameter g; by revealing the connection to Riemannian geometry, a natural choice of g emerges for which the two Newton directions coincide. This choice of g is also shown to be related to the least-squares multiplier estimate from a sequential quadratic programming (SQP) approach, and with this multiplier, SQP gives the same search direction as the Newton methods.", "For convex minimization we introduce an algorithm based on **-space decomposition. The method uses a bundle subroutine to generate a sequence of approximate proximal points. When a primal-dual track leading to a solution and zero subgradient pair exists, these points approximate the primal track points and give the algorithm's **, or corrector, steps. The subroutine also approximates dual track points that are **-gradients needed for the method's **-Newton predictor steps. With the inclusion of a simple line search the resulting algorithm is proved to be globally convergent. The convergence is superlinear if the primal-dual track points and the objective's **-Hessian are approximated well enough." ] }
0812.0893
2762701907
We provide linear-time algorithms for geometric graphs with sublinearly many edge crossings. That is, we provide algorithms running in @math time on connected geometric graphs having @math vertices and @math pairwise crossings, where @math is smaller than @math by an iterated logarithmic factor. Specific problems that we study include Voronoi diagrams and single-source shortest paths. Our algorithms all run in linear time in the standard comparison-based computational model; hence, we make no assumptions about the distribution or bit complexities of edge weights, nor do we utilize unusual bit-level operations on memory words. Instead, our algorithms are based on a planarization method that “zeros in” on edge crossings, together with methods for applying planar separator decompositions to geometric graphs with sublinearly many crossings. Incidentally, our planarization algorithm also solves an open computational geometry problem of Chazelle for triangulating a self-intersecting polygonal chain having @math segments and @math crossings in linear time, for the case when @math is sublinear in @math by an iterated logarithmic factor.
In the algorithms community, there has been considerable prior work on shortest path algorithms for Euclidean graphs (e.g., see @cite_35 @cite_50 @cite_48 @cite_1 @cite_8 @cite_20 ), which are geometric graphs where edges are weighted by the lengths of the corresponding line segments. This prior work takes a decidedly different approach from the one we take in this paper, however, in that it focuses on exploiting special properties of the edge weights that do not hold in the comparison model, whereas we study road networks as geometric graphs with a sublinear number of edge crossings and desire linear-time algorithms that hold in the comparison model.
{ "cite_N": [ "@cite_35", "@cite_8", "@cite_48", "@cite_1", "@cite_50", "@cite_20" ], "mid": [ "2014889099", "2077383738", "2003921147", "1493214567", "2166542580", "2100586428" ], "abstract": [ "We propose shortest path algorithms that use A* search in combination with a new graph-theoretic lower-bounding technique based on landmarks and the triangle inequality. Our algorithms compute optimal shortest paths and work on any directed graph. We give experimental results showing that the most efficient of our new algorithms outperforms previous algorithms, in particular A* search with Euclidean bounds, by a wide margin on road networks and on some synthetic problem families.", "", "The computation of shortest paths between different locations on a road network appears to be a key problem in many applications. Often, a shortest path is required in a very short time. In this article, we try to find an answer to the question of which shortest path algorithm for the one-to-one shortest path problem runs fastest on a large real-road network. An extensive computational study is presented, in which six existing algorithms and a new label correcting algorithm are implemented in several variants and compared on the real-road network of The Netherlands. In total, 168 versions are implemented, of which 18 versions are variants of the new algorithm and 60 versions are new by the application of bidirectional search. In the first part of the article we present a mathematical framework and a review of existing algorithms. We then describe combinations of existing algorithms with bidirectional search and heuristic-estimate techniques based on Euclidean distance and landmarks. We also present some useful static reduction techniques. In the final part of the article we present results from computational tests on The Netherlands road network. The new algorithm, which combines concepts from previous work on buckets and label-correcting techniques, has generally the shortest running times of any of the tested algorithms. © 2006 Wiley Periodicals, Inc. NETWORKS, Vol. 48(4), 182–194 2006This research is part of a Ph.D. project of the second author and a Master's project of the first author, at Delft University of Technology, Department of Electrical Engineering, Mathematics and Computer Science.", "We present a new speedup technique for route planning that exploits the hierarchy inherent in real world road networks. Our algorithm preprocesses the eight digit number of nodes needed for maps of the USA or Western Europe in a few hours using linear space. Shortest (i.e. fastest) path queries then take around eight milliseconds to produce exact shortest paths. This is about 2 000 times faster than using Dijkstra’s algorithm.", "In practice, computing a shortest path from one node to another in a directed graph is a very common task. This problem is classically solved by Dijkstra's algorithm. Many techniques are known to speed up this algorithm heuristically, while optimality of the solution can still be guaranteed. In most studies, such techniques are considered individually. The focus of our work is combination of speed-up techniques for Dijkstra's algorithm. We consider all possible combinations of four known techniques, namely, goal-directed search, bidirectional search, multilevel approach, and shortest-path containers, and show how these can be implemented. In an extensive experimental study, we compare the performance of the various combinations and analyze how the techniques harmonize when jointly applied. 
Several real-world graphs from road maps and public transport and three types of generated random graphs are taken into account.", "The classic problem of finding the shortest path over a network has been the target of many research efforts over the years. These research efforts have resulted in a number of different algorithms and a considerable amount of empirical findings with respect to performance. Unfortunately, prior research does not provide a clear direction for choosing an algorithm when one faces the problem of computing shortest paths on real road networks. Most of the computational testing on shortest path algorithms has been based on randomly generated networks, which may not have the characteristics of real road networks. In this paper, we provide an objective evaluation of 15 shortest path algorithms using a variety of real road networks. Based on the evaluation, a set of recommended algorithms for computing shortest paths on real road networks is identified. This evaluation should be particularly useful to researchers and practitioners in operations research, management science, transportation, and Geographic Information Systems." ] }
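As a small illustration of the landmark/triangle-inequality idea mentioned in the cited work on Euclidean graphs (the ALT heuristic of @cite_35 ), the lower bound used to guide A* can be computed as below. The function name and the assumption of an undirected graph with precomputed landmark-to-node distances are ours.

```python
def landmark_lower_bound(dist_from_landmarks, v, t):
    """Lower bound on d(v, t) from precomputed landmark distances.

    `dist_from_landmarks[L][x]` holds the exact shortest-path distance from
    landmark L to node x (undirected graph assumed).  By the triangle
    inequality, |d(L, t) - d(L, v)| <= d(v, t) for every landmark L, so the
    maximum over landmarks is a valid A* heuristic for the target t.
    """
    return max(abs(d[t] - d[v]) for d in dist_from_landmarks.values())
```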
0812.0893
2762701907
We provide linear-time algorithms for geometric graphs with sublinearly many edge crossings. That is, we provide algorithms running in @math time on connected geometric graphs having @math vertices and @math pairwise crossings, where @math is smaller than @math by an iterated logarithmic factor. Specific problems that we study include Voronoi diagrams and single-source shortest paths. Our algorithms all run in linear time in the standard comparison-based computational model; hence, we make no assumptions about the distribution or bit complexities of edge weights, nor do we utilize unusual bit-level operations on memory words. Instead, our algorithms are based on a planarization method that “zeros in” on edge crossings, together with methods for applying planar separator decompositions to geometric graphs with sublinearly many crossings. Incidentally, our planarization algorithm also solves an open computational geometry problem of Chazelle for triangulating a self-intersecting polygonal chain having @math segments and @math crossings in linear time, for the case when @math is sublinear in @math by an iterated logarithmic factor.
The specific problems for which we provide linear-time algorithms are well known in the general algorithms and computational geometry literatures. For general graphs with @math vertices and @math edges, there is excellent prior work on efficient algorithms in the comparison model, including single-source shortest paths @cite_23 @cite_44 @cite_26 , which can be computed in @math time @cite_22 , and Voronoi diagrams @cite_5 @cite_29 , whose graph-theoretic version can be constructed in @math time @cite_6 @cite_24 . None of these algorithms runs in linear time, even for planar graphs. Linear-time algorithms for planar graphs are known for single-source shortest paths @cite_27 , but these unfortunately do not immediately translate into linear-time algorithms for non-planar geometric graphs. In addition, there are a number of efficient shortest-path algorithms that make assumptions about edge weights @cite_46 @cite_35 @cite_51 @cite_47 ; hence, they are not applicable in the comparison model.
{ "cite_N": [ "@cite_35", "@cite_47", "@cite_26", "@cite_22", "@cite_29", "@cite_6", "@cite_44", "@cite_24", "@cite_27", "@cite_23", "@cite_5", "@cite_46", "@cite_51" ], "mid": [ "2014889099", "1999545213", "2083669067", "2084224084", "", "2000879295", "1595295153", "1990317614", "2762935521", "2752885492", "1967005434", "2008034501", "2095352457" ], "abstract": [ "We propose shortest path algorithms that use A* search in combination with a new graph-theoretic lower-bounding technique based on landmarks and the triangle inequality. Our algorithms compute optimal shortest paths and work on any directed graph. We give experimental results showing that the most efficient of our new algorithms outperforms previous algorithms, in particular A* search with Euclidean bounds, by a wide margin on road networks and on some synthetic problem families.", "The single-source shortest paths problem (SSSP) is one of the classic problems in algorithmic graph theory: given a positively weighted graph G with a source vertex s , find the shortest path from s to all other vertices in the graph. Since 1959, all theoretical developments in SSSP for general directed and undirected graphs have been based on Dijkstra's algorithm, visiting the vertices in order of increasing distance from s . Thus, any implementation of Dijkstra's algorithm sorts the vertices according to their distances from s . However, we do not know how to sort in linear time. Here, a deterministic linear time and linear space algorithm is presented for the undirected single source shortest paths problem with positive integer weights. The algorithm avoids the sorting bottleneck by building a hierarchical bucketing structure, identifying vertex pairs that may be visited in any order.", "We summarize the currently best known theoretical results for the single-source shortest paths problem for directed graphs with non-negative edge weights. We also point out that a recent result due to Cherkassky, Goldberg and Silverstein (1996) leads to even better time bounds for this problem than claimed by the authors.", "In this paper we develop a new data structure for implementing heaps (priority queues). Our structure, Fibonacci heaps (abbreviated F-heaps ), extends the binomial queues proposed by Vuillemin and studied further by Brown. F-heaps support arbitrary deletion from an n -item heap in O (log n ) amortized time and all other standard heap operations in O (1) amortized time. Using F-heaps we are able to obtain improved running times for several network optimization algorithms. In particular, we obtain the following worst-case bounds, where n is the number of vertices and m the number of edges in the problem graph: O ( n log n + m ) for the single-source shortest path problem with nonnegative edge lengths, improved from O ( m log ( m n +2) n ); O ( n 2 log n + nm ) for the all-pairs shortest path problem, improved from O ( nm log ( m n +2) n ); O ( n 2 log n + nm ) for the assignment problem (weighted bipartite matching), improved from O ( nm log ( m n +2) n ); O ( mβ ( m, n )) for the minimum spanning tree problem, improved from O ( m log log ( m n +2) n ); where β ( m, n ) = min i u log ( i ) n ≤ m n . Note that β ( m, n ) ≤ log * n if m ≥ n . Of these results, the improved bound for minimum spanning trees is the most striking, although all the results give asymptotic improvements for graphs of appropriate densities.", "", "The Voronoi diagram is a famous structure of computational geometry. 
We show that there is a straightforward equivalent in graph theory which can be efficiently computed. In particular, we give two algorithms for the computation of graph Voronoi diagrams, prove a lower bound on the problem, and identify cases where the algorithms presented are optimal. The space requirement of a graph Voronoi diagram is modest, since it needs no more space than does the graph itself. The investigation of graph Voronoi diagrams is motivated by many applications and problems on networks that can be easily solved with their help. This includes the computation of nearest facilities, all nearest neighbors and closest pairs, some kind of collision free moving, and anticenters and closest points. © 2000 John Wiley & Sons, Inc.", "PART I: FUNDAMENTAL TOOLS. Algorithm Analysis. Basic Data Structures. Search Trees and Skip Lists. Sorting, Sets, and Selection. Fundamental Techniques. PART II: GRAPH ALGORITHMS. Graphs. Weighted Graphs. Network Flow and Matching. PART III: INTERNET ALGORITHMICS. Text Processing. Number Theory and Cryptograhy. Network Algorithms. PART IV: ADDITIONAL TOPICS. Computational Geometry. NP-Completeness. Algorithmic Frameworks. Appendix: Useful Mathematical Facts. Bibliography. Index.", "Abstract We present a new implementation of the Kou, Markowsky and Berman algorithm for finding a Steiner tree for a connected, undirected distance graph with a specified subset S of the set of vertices V . The total distance of all edges of this Steiner tree is at most 2(1-1 l ) times that of a Steiner minimal tree, where l is the minimum number of leaves in any Steiner minimal tree for the given graph. The algorithm runs in O(| E |+| V |log| V |) time in the worst case, where E is the set of all edges and V the set of all vertices in the graph.", "We give a linear-time algorithm for single-source shortest paths in planar graphs with nonnegative edge-lengths. Our algorithm also yields a linear-time algorithm for maximum flow in a planar graph with the source and sink on the same face. For the case where negative edge-lengths are allowed, we give an algorithm requiringO(n4 3log(nL)) time, whereLis the absolute value of the most negative length. This algorithm can be used to obtain similar bounds for computing a feasible flow in a planar network, for finding a perfect matching in a planar bipartite graph, and for finding a maximum flow in a planar graph when the source and sink are not on the same face. We also give parallel and dynamic versions of these algorithms.", "From the Publisher: The updated new edition of the classic Introduction to Algorithms is intended primarily for use in undergraduate or graduate courses in algorithms or data structures. Like the first edition,this text can also be used for self-study by technical professionals since it discusses engineering issues in algorithm design as well as the mathematical aspects. In its new edition,Introduction to Algorithms continues to provide a comprehensive introduction to the modern study of algorithms. The revision has been updated to reflect changes in the years since the book's original publication. New chapters on the role of algorithms in computing and on probabilistic analysis and randomized algorithms have been included. Sections throughout the book have been rewritten for increased clarity,and material has been added wherever a fuller explanation has seemed useful or new information warrants expanded coverage. 
As in the classic first edition,this new edition of Introduction to Algorithms presents a rich variety of algorithms and covers them in considerable depth while making their design and analysis accessible to all levels of readers. Further,the algorithms are presented in pseudocode to make the book easily accessible to students from all programming language backgrounds. Each chapter presents an algorithm,a design technique,an application area,or a related topic. The chapters are not dependent on one another,so the instructor can organize his or her use of the book in the way that best suits the course's needs. Additionally,the new edition offers a 25 increase over the first edition in the number of problems,giving the book 155 problems and over 900 exercises thatreinforcethe concepts the students are learning.", "", "Abstract : We give an O((square root of n)m log N) algorithm for the single-source shortest paths problem with integral arc lengths. (Here n and m is the number of nodes and arcs in the input network and N is essentially the absolute value of the most negative arc length.) This improves previous bounds for the problem.", "The quest for a linear-time single-source shortest-path (SSSP) algorithm on directed graphs with positive edge weights is an ongoing hot research topic. While Thorup recently found an O(n + m) time RAM algorithm for undirected graphs with n nodes, m edges and integer edge weights in 0,…,2w - 1 where w denotes the word length, the currently best time bound for directed sparse graphs on a RAM is O(n + m · log log n). In the present paper we study the average-case complexity of SSSP. We give a simple algorithm for arbitrary directed graphs with random edge weights uniformly distributed in [0, 1] and show that it needs linear time O(n + m) with high probability." ] }
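For completeness, the comparison-model baseline against which the bounds above are measured is Dijkstra's algorithm with a binary heap, which runs in O((n + m) log n) time using only comparisons and additions on edge weights. A standard sketch, with our own adjacency-list representation, is:

```python
import heapq

def dijkstra(adj, source):
    """Single-source shortest paths with non-negative edge weights.

    `adj[u]` is a list of (v, w) pairs.  Runs in O((n + m) log n) time in the
    comparison model: weights are only compared and added, never hashed,
    bucketed, or inspected bit by bit.
    """
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry, skip it
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```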
0812.0893
2762701907
We provide linear-time algorithms for geometric graphs with sublinearly many edge crossings. That is, we provide algorithms running in @math time on connected geometric graphs having @math vertices and @math pairwise crossings, where @math is smaller than @math by an iterated logarithmic factor. Specific problems that we study include Voronoi diagrams and single-source shortest paths. Our algorithms all run in linear time in the standard comparison-based computational model; hence, we make no assumptions about the distribution or bit complexities of edge weights, nor do we utilize unusual bit-level operations on memory words. Instead, our algorithms are based on a planarization method that “zeros in” on edge crossings, together with methods for applying planar separator decompositions to geometric graphs with sublinearly many crossings. Incidentally, our planarization algorithm also solves an open computational geometry problem of Chazelle for triangulating a self-intersecting polygonal chain having @math segments and @math crossings in linear time, for the case when @math is sublinear in @math by an iterated logarithmic factor.
Chazelle @cite_18 shows that any simple polygon can be triangulated in @math time and that this algorithm can be extended to determine in @math time, for any polygonal chain @math , whether or not @math contains a self-intersection. In addition, Chazelle posed as an open problem whether or not one can compute the arrangement of a non-simple polygon in @math time, where @math is the number of pairwise edge crossings. Clarkson, Cole, and Tarjan @cite_7 @cite_31 answer this question in the affirmative for polygons with a super-linear number of crossings, as they give a randomized algorithm that solves this problem in @math expected time. There is, to our knowledge, no previous algorithm that solves Chazelle's open problem, however, for non-simple polygons with a sublinear number of edge crossings.
{ "cite_N": [ "@cite_18", "@cite_31", "@cite_7" ], "mid": [ "2106413026", "2106114602", "2150861432" ], "abstract": [ "We give a deterministic algorithm for triangulating a simple polygon in linear time. The basic strategy is to build a coarse approximation of a triangulation in a bottom-up phase and then use the information computed along the way to refine the triangulation in a top-down phase. The main tools used are the polygon-cutting theorem, which provides us with a balancing scheme, and the planar separator theorem, whose role is essential in the discovery of new diagonals. Only elementary data structures are required by the algorithm. In particular, no dynamic search trees, of our algorithm.", "", "We describe randomized parallel algorithms for building trapezoidal diagrams of line segments in the plane. The algorithms are designed for a CRCW PRAM. For general segments, we give an algorithm requiring optimal O(A+n log n) expected work and optimal O(log n) time, where A is the number of intersecting pairs of segments. If the segments form a simple chain, we give an algorithm requiring optimal O(n) expected work and O(log n log log n log* n) expected time, and a simpler algorithm requiring O(n log* n) expected work. The serial algorithm corresponding to the latter is among the simplest known algorithms requiring O(n log* n) expected operations. For a set of segments forming K chains, we give an algorithm requiring O(A+n log* n+K log n) expected work and O(log n log log n log* n) expected time. The parallel time bounds require the assumption that enough processors are available, with processor allocations every log n steps." ] }
0812.1237
1643135477
We propose an algorithm for simultaneously detecting and locating changepoints in a time series, and a framework for predicting the distribution of the next point in the series. The kernel of the algorithm is a system of equations that computes, for each index i, the probability that the last (most recent) change point occurred at i. We evaluate this algorithm by applying it to the change point detection problem and comparing it to the generalized likelihood ratio (GLR) algorithm. We find that our algorithm is as good as GLR, or better, over a wide range of scenarios, and that the advantage increases as the signal-to-noise ratio decreases.
In a seminal paper on the tracking problem, Chernoff and Zacks present a Bayesian estimator for the current mean of a process with abrupt changes @cite_1 . Like us, they start with an estimator that assumes there is at most one change, and then use it to generate an approximate estimator in the general case. Their algorithm makes the additional assumption that the change size is distributed normally; our algorithm does not require this assumption. Also, our algorithm generates a predictive distribution for the next value in the series, rather than an estimate of the current mean.
{ "cite_N": [ "@cite_1" ], "mid": [ "2095283167" ], "abstract": [ "Abstract : A tracking problem is considered. Observations are taken on the successive positions of an object traveling on a path, and it is desired to estimate its current position. The objective is to arrive at a simple formula which implicitly accounts for possible changes in direction and discounts observations taken before the latest change. To develop a reasonable procedure, a simpler problem is studied. Successive observations are taken on n independently and normally distributed random variables X sub 1, X sub 2, ..., X sub n with means mu sub 1, mu sub 2, ..., mu sub n and variance 1. Each mean mu sub i is equal to the preceding mean mu sub (i-1) except when an occasional change takes place. The object is to estimate the current mean mu sub n. This problem is studied from a Bayesian point of view. An 'ad hoc' estimator is described, which applies a combination of the A.M.O.C. Bayes estimator and a sequence of tests designed to locate the last time point of change. The various estimators are then compared by a Monte Carlo study of samples of size 9. This Bayesian approach seems to be more appropriate for the related problem of testing whether a change in mean has occurred. This test procedure is simpler than that used by Page. The power functions of the two procedures are compared. (Author)" ] }
0812.1237
1643135477
We propose an algorithm for simultaneously detecting and locating changepoints in a time series, and a framework for predicting the distribution of the next point in the series. The kernel of the algorithm is a system of equations that computes, for each index i, the probability that the last (most recent) change point occurred at i. We evaluate this algorithm by applying it to the change point detection problem and comparing it to the generalized likelihood ratio (GLR) algorithm. We find that our algorithm is as good as GLR, or better, over a wide range of scenarios, and that the advantage increases as the signal-to-noise ratio decreases.
The algorithm we propose can be extended to detect changes in the variance as well as the mean of a process. This kind of changepoint has received relatively little attention; one exception is recent work by Jandhyala, Fotopoulos and Hawkins @cite_10 .
{ "cite_N": [ "@cite_10" ], "mid": [ "2036835505" ], "abstract": [ "Detection of change-points in normal means is a well-studied problem. The parallel problem of detecting changes in variance has had less attention. The form of the generalized likelihood ratio test statistic has long been known, but its null distribution resisted exact analysis. In this paper, we formulate the change-point problem for a sequence of chi-square random variables. We describe a procedure that is exact for the distribution of the likelihood ratio statistic for all even degrees of freedom, and gives upper and lower bounds for odd (and also for non-integer) degrees of freedom. Both the liberal and conservative bounds for X 2 1 degrees of freedom are shown through simulation to be reasonably tight. The important problem of testing for change in the normal variance of individual observations corresponds to the X 2 1 case. The non-null case is also covered, and confidence intervals for the true change point are derived. The mehhodology is illustrated with an application to quality control in a deep level gold mine. Other applications include ambulatory monitoring of medical data and econometrics." ] }
0812.1237
1643135477
We propose an algorithm for simultaneously detecting and locating changepoints in a time series, and a framework for predicting the distribution of the next point in the series. The kernel of the algorithm is a system of equations that computes, for each index i, the probability that the last (most recent) change point occurred at i. We evaluate this algorithm by applying it to the change point detection problem and comparing it to the generalized likelihood ratio (GLR) algorithm. We find that our algorithm is as good as GLR, or better, over a wide range of scenarios, and that the advantage increases as the signal-to-noise ratio decreases.
Most recently, Vellekoop and Clark propose a nonlinear filtering approach to the changepoint detection problem (but not estimation or tracking) @cite_0 .
{ "cite_N": [ "@cite_0" ], "mid": [ "2078450032" ], "abstract": [ "A benchmark change detection problem is considered which involves the detection of a change of unknown size at an unknown time. Both unknown quantities are modeled by stochastic variables, which allows the problem to be formulated within a Bayesian framework. It turns out that the resulting nonlinear filtering problem is much harder than the well-known detection problem for known sizes of the change, and in particular that it can no longer be solved in a recursive manner. An approximating recursive filter is therefore proposed, which is designed using differential-geometric methods in a suitably chosen space of unnormalized probability densities. The new nonlinear filter can be interpreted as an adaptive version of the celebrated Shiryayev--Wonham equation for the detection of a priori known changes, combined with a modified Kalman filter structure to generate estimates of the unknown size of the change. This intuitively appealing interpretation of the nonlinear filter and its excellent performance in simulation studies indicates that it may be of practical use in realistic change detection problems." ] }
0812.1237
1643135477
We propose an algorithm for simultaneously detecting and locating changepoints in a time series, and a framework for predicting the distribution of the next point in the series. The kernel of the algorithm is a system of equations that computes, for each index i, the probability that the last (most recent) change point occurred at i. We evaluate this algorithm by applying it to the change point detection problem and comparing it to the generalized likelihood ratio (GLR) algorithm. We find that our algorithm is as good as GLR, or better, over a wide range of scenarios, and that the advantage increases as the signal-to-noise ratio decreases.
We are aware of a few examples where these techniques have been applied to network measurements. Blažek et al. explore the use of change-point algorithms to detect denial-of-service attacks @cite_3 . Similarly, Deshpande, Thottan, and Sikdar use non-parametric CUSUM to detect BGP instabilities @cite_4 .
{ "cite_N": [ "@cite_4", "@cite_3" ], "mid": [ "2146697465", "302993883" ], "abstract": [ "The increasing incidence of worm attacks in the Internet and the resulting instabilities in the global routing properties of the border gateway protocol (BGP) routers pose a serious threat to the connectivity and the ability of the Internet to deliver data correctly. In this paper we propose a mechanism to detect predict the onset of such instabilities which can then enable the timely execution of preventive strategies in order to minimize the damage caused by the worm. Our technique is based on online statistical methods relying on sequential change-point and persistence filter based detection algorithms. Our technique is validated using a year's worth of real traces collected from BGP routers in the Internet that we use to detect predict the global routing instabilities corresponding to the Code Red II, Nimda and SQL Slammer worms.", "In computer networks, large scale attacks in theirflnalstagescanreadilybeidentifledbyobservingvery abruptchangesinthenetworktra-c,butintheearlystage of an attack, these changes are hard to detect and di-cult todistinguishfromusualtra-c∞uctuations. Inthispaper, wedevelope-cientadaptivesequentialandbatch-sequential methods for an early detection of attacks from the class of of service attacks\". These methods employ statis- tical analysis of data from multiple layers of the network protocol for detection of very subtle tra-c changes, which are typical for these kinds of attacks. Both the sequential and batch-sequential algorithms utilize thresholding of test statistics to achieve a flxed rate of false alarms. The algo- rithmsaredevelopedonthebasisofthechange-pointdetec- tiontheory: todetectachangeinstatisticalmodelsassoon as possible, controlling the rate of false alarms. There are threeattractivefeaturesoftheapproach. First,bothmeth- odsareself-learning,whichenablesthemtoadapttovarious network loads and usage patterns. Second, they allow for detecting attacks with small average delay for a given false alarm rate. Third, they are computationally simple, and hence,canbeimplementedonline. Theoreticalframeworks for both kinds of detection procedures, as well as results of simulations, are presented." ] }
0812.1237
1643135477
We propose an algorithm for simultaneously detecting and locating changepoints in a time series, and a framework for predicting the distribution of the next point in the series. The kernel of the algorithm is a system of equations that computes, for each index i, the probability that the last (most recent) change point occurred at i. We evaluate this algorithm by applying it to the change point detection problem and comparing it to the generalized likelihood ratio (GLR) algorithm. We find that our algorithm is as good as GLR, or better, over a wide range of scenarios, and that the advantage increases as the signal-to-noise ratio decreases.
In the context of large databases, Kifer, Ben-David and Gehrke propose an algorithm for detecting changes in data streams @cite_14 . It is based on a two-window paradigm, in which the distribution of values in the current window is compared to the distribution of values in a past reference window. This approach is appropriate when the number of points between changepoints is large and alarm delay is not a critical metric.
{ "cite_N": [ "@cite_14" ], "mid": [ "2120587290" ], "abstract": [ "Detecting changes in a data stream is an important area of research with many applications. In this paper, we present a novel method for the detection and estimation of change. In addition to providing statistical guarantees on the reliability of detected changes, our method also provides meaningful descriptions and quantification of these changes. Our approach assumes that the points in the stream are independently generated, but otherwise makes no assumptions on the nature of the generating distribution. Thus our techniques work for both continuous and discrete data. In an experimental study we demonstrate the power of our techniques." ] }
0811.4139
2949367021
Algebraic codes that achieve list decoding capacity were recently constructed by a careful folding'' of the Reed-Solomon code. The low-degree'' nature of this folding operation was crucial to the list decoding algorithm. We show how such folding schemes conducive to list decoding arise out of the Artin-Frobenius automorphism at primes in Galois extensions. Using this approach, we construct new folded algebraic-geometric codes for list decoding based on cyclotomic function fields with a cyclic Galois group. Such function fields are obtained by adjoining torsion points of the Carlitz action of an irreducible @math . The Reed-Solomon case corresponds to the simplest such extension (corresponding to the case @math ). In the general case, we need to descend to the fixed field of a suitable Galois subgroup in order to ensure the existence of many degree one places that can be used for encoding. Our methods shed new light on algebraic codes and their list decoding, and lead to new codes achieving list decoding capacity. Quantitatively, these codes provide list decoding (and list recovery soft decoding) guarantees similar to folded Reed-Solomon codes but with an alphabet size that is only polylogarithmic in the block length. In comparison, for folded RS codes, the alphabet size is a large polynomial in the block length. This has applications to fully explicit (with no brute-force search) binary concatenated codes for list decoding up to the Zyablov radius.
Independent of our work, Huang and Narayanan @cite_20 also consider AG codes constructed from Galois extensions, and observe how automorphisms of large order can be used for folding such codes. To our knowledge, the only instantiation of this approach that improves on folded RS codes is the one based on cyclotomic function fields from our work. As an alternate approach, they also propose a decoding method that works with folding via automorphisms of small order. This involves computing several coefficients of the power series expansion of the message function at a low-degree place. Unfortunately, piecing together these coefficients into a function could lead to an exponential list size bound. The authors suggest a heuristic assumption under which they can show that for a random received word, the expected list size and running time are polynomially bounded.
{ "cite_N": [ "@cite_20" ], "mid": [ "1637999745" ], "abstract": [ "We describe a new class of list decodable codes based on Galois extensions of function fields and present a list decoding algorithm. These codes are obtained as a result of folding the set of rational places of a function field using certain elements (automorphisms) from the Galois group of the extension. This work is an extension of Folded Reed Solomon codes to the setting of Algebraic Geometric codes. We describe two constructions based on this framework depending on if the order of the automorphism used to fold the code is large or small compared to the block length. When the automorphism is of large order, the codes have polynomially bounded list size in the worst case. This construction gives codes of rate @math over an alphabet of size independent of block length that can correct a fraction of @math errors subject to the existence of asymptotically good towers of function fields with large automorphisms. The second construction addresses the case when the order of the element used to fold is small compared to the block length. In this case a heuristic analysis shows that for a random received word, the expected list size and the running time of the decoding algorithm are bounded by a polynomial in the block length. When applied to the Garcia-Stichtenoth tower, this yields codes of rate @math over an alphabet of size @math , that can correct a fraction of @math errors." ] }
0811.4413
2950265833
Hidden Markov Models (HMMs) are one of the most fundamental and widely used statistical tools for modeling discrete time series. In general, learning HMMs from data is computationally hard (under cryptographic assumptions), and practitioners typically resort to search heuristics which suffer from the usual local optima issues. We prove that under a natural separation condition (bounds on the smallest singular value of the HMM parameters), there is an efficient and provably correct algorithm for learning HMMs. The sample complexity of the algorithm does not explicitly depend on the number of distinct (discrete) observations---it implicitly depends on this quantity through spectral properties of the underlying HMM. This makes the algorithm particularly applicable to settings with a large number of observations, such as those in natural language processing where the space of observation is sometimes the words in a language. The algorithm is also simple, employing only a singular value decomposition and matrix multiplications.
The second idea is that we can represent the probability of sequences as products of matrix operators, as in the literature on multiplicity automata (see, e.g., @cite_0 for a discussion of this relationship). This idea was re-used in both the Observable Operator Model of @cite_3 and the Predictive State Representations of @cite_0 , both of which are closely related and both of which can model HMMs. In fact, the former work by @cite_3 provides a non-iterative algorithm for learning HMMs, with an asymptotic analysis. However, this algorithm assumed knowledge of a set of 'characteristic events', which is a rather strong assumption that effectively reveals some relationship between the hidden states and observations. In our algorithm, this problem is avoided through the first idea.
{ "cite_N": [ "@cite_0", "@cite_3" ], "mid": [ "1539249135", "2149960632" ], "abstract": [ "Planning and learning in Partially Observable MDPs (POMDPs) are among the most challenging tasks in both the AI and Operation Research communities. Although solutions to these problems are intractable in general, there might be special cases, such as structured POMDPs, which can be solved efficiently. A natural and possibly efficient way to represent a POMDP is through the predictive state representation (PSR) — a representation which recently has been receiving increasing attention. In this work, we relate POMDPs to multiplicity automata — showing that POMDPs can be represented by multiplicity automata with no increase in the representation size. Furthermore, we show that the size of the multiplicity automaton is equal to the rank of the predictive state representation. Therefore, we relate both the predictive state representation and POMDPs to the well-founded multiplicity automata literature. Based on the multiplicity automata representation, we provide a planning algorithm which is exponential only in the multiplicity automata rank rather than the number of states of the POMDP. As a result, whenever the predictive state representation is logarithmic in the standard POMDP representation, our planning algorithm is efficient.", "A widely used class of models for stochastic systems is hidden Markov models. Systems that can be modeled by hidden Markov models are a proper subclass of linearly dependent processes, a class of stochastic systems known from mathematical investigations carried out over the past four decades. This article provides a novel, simple characterization of linearly dependent processes, called observable operator models. The mathematical properties of observable operator models lead to a constructive learning algorithm for the identification of linearly dependent processes. The core of the algorithm has a time complexity of O (N + nm3), where N is the size of training data, n is the number of distinguishable outcomes of observations, and m is model state-space dimension." ] }
0811.4672
2953306525
In many emerging applications, data streams are monitored in a network environment. Due to limited communication bandwidth and other resource constraints, a critical and practical demand is to online compress data streams continuously with quality guarantee. Although many data compression and digital signal processing methods have been developed to reduce data volume, their super-linear time and more-than-constant space complexity prevents them from being applied directly on data streams, particularly over resource-constrained sensor networks. In this paper, we tackle the problem of online quality guaranteed compression of data streams using fast linear approximation (i.e., using line segments to approximate a time series). Technically, we address two versions of the problem which explore quality guarantees in different forms. We develop online algorithms with linear time complexity and constant cost in space. Our algorithms are optimal in the sense they generate the minimum number of segments that approximate a time series with the required quality guarantee. To meet the resource constraints in sensor networks, we also develop a fast algorithm which creates connecting segments with very simple computation. The low cost nature of our methods leads to a unique edge on the applications of massive and fast streaming environment, low bandwidth networks, and heavily constrained nodes in computational power. We implement and evaluate our methods in the application of an acoustic wireless sensor network.
In @cite_7 , the authors use PLA to approximate a time series, but they place an unnecessary constraint on the algorithm: the segment endpoints must be taken from the original dataset. Overall, their algorithm runs in @math time and uses @math space.
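To make the endpoint constraint concrete, the sketch below (ours, not the algorithm of @cite_7 ) compares the maximum error of a segment forced to pass through the first and last data points of a window with that of an unconstrained least-squares segment; all function and variable names are illustrative assumptions.

```python
import numpy as np

def max_error_endpoint_constrained(t, y):
    """Max absolute error of the segment joining the first and last samples of the window."""
    slope = (y[-1] - y[0]) / (t[-1] - t[0])
    fitted = y[0] + slope * (t - t[0])
    return np.max(np.abs(y - fitted))

def max_error_least_squares(t, y):
    """Max absolute error of the unconstrained least-squares segment."""
    a, b = np.polyfit(t, y, deg=1)          # slope, intercept
    return np.max(np.abs(y - (a * t + b)))

t = np.arange(10.0)
y = np.sin(t / 3.0) + 0.05 * np.random.randn(10)
print(max_error_endpoint_constrained(t, y), max_error_least_squares(t, y))
```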
{ "cite_N": [ "@cite_7" ], "mid": [ "2134238238" ], "abstract": [ "Limited energy supply is one of the major constraints in wireless sensor networks. A feasible strategy is to aggressively reduce the spatial sampling rate of sensors, that is, the density of the measure points in a field. By properly scheduling, we want to retain the high fidelity of data collection. In this paper, we propose a data collection method that is based on a careful analysis of the surveillance data reported by the sensors. By exploring the spatial correlation of sensing data, we dynamically partition the sensor nodes into clusters so that the sensors in the same cluster have similar surveillance time series. They can share the workload of data collection in the future since their future readings may likely be similar. Furthermore, during a short-time period, a sensor may report similar readings. Such a correlation in the data reported from the same sensor is called temporal correlation, which can be explored to further save energy. We develop a generic framework to address several important technical challenges, including how to partition the sensors into clusters, how to dynamically maintain the clusters in response to environmental changes, how to schedule the sensors in a cluster, how to explore temporal correlation, and how to restore the data in the sink with high fidelity. We conduct an extensive empirical study to test our method using both a real test bed system and a large-scale synthetic data set." ] }
0811.4672
2953306525
In many emerging applications, data streams are monitored in a network environment. Due to limited communication bandwidth and other resource constraints, a critical and practical demand is to online compress data streams continuously with quality guarantee. Although many data compression and digital signal processing methods have been developed to reduce data volume, their super-linear time and more-than-constant space complexity prevents them from being applied directly on data streams, particularly over resource-constrained sensor networks. In this paper, we tackle the problem of online quality guaranteed compression of data streams using fast linear approximation (i.e., using line segments to approximate a time series). Technically, we address two versions of the problem which explore quality guarantees in different forms. We develop online algorithms with linear time complexity and constant cost in space. Our algorithms are optimal in the sense they generate the minimum number of segments that approximate a time series with the required quality guarantee. To meet the resource constraints in sensor networks, we also develop a fast algorithm which creates connecting segments with very simple computation. The low cost nature of our methods leads to a unique edge on the applications of massive and fast streaming environment, low bandwidth networks, and heavily constrained nodes in computational power. We implement and evaluate our methods in the application of an acoustic wireless sensor network.
The authors of @cite_17 give a comprehensive review of the existing techniques for segmenting time series. They categorize the solutions into three groups, namely sliding-window methods, top-down methods, and bottom-up methods. They then combine the advantages of the sliding-window and bottom-up approaches in a Sliding-Window-And-Bottom-up (SWAB) algorithm, which uses a moving window to constrain the time period under consideration.
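As a rough sketch of the sliding-window component that SWAB builds on (the bottom-up merging inside the buffer is omitted, and the least-squares error measure and all names are our own choices), a segment is grown until the fit error exceeds a threshold:

```python
import numpy as np

def sliding_window_segments(y, max_error):
    """Greedy sliding-window PLA: grow each segment until the fit error exceeds max_error."""
    segments, start = [], 0
    while start < len(y) - 1:
        end = start + 1
        while end + 1 < len(y):
            t = np.arange(start, end + 2)
            a, b = np.polyfit(t, y[start:end + 2], deg=1)
            if np.max(np.abs(y[start:end + 2] - (a * t + b))) > max_error:
                break
            end += 1
        segments.append((start, end))       # segment covers points start..end
        start = end
    return segments

y = np.concatenate([np.linspace(0, 1, 20), np.linspace(1, 0, 20)])
print(sliding_window_segments(y, max_error=0.05))
```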
{ "cite_N": [ "@cite_17" ], "mid": [ "2107633943" ], "abstract": [ "In recent years, there has been an explosion of interest in mining time-series databases. As with most computer science problems, representation of the data is the key to efficient and effective solutions. One of the most commonly used representations is piecewise linear approximation. This representation has been used by various researchers to support clustering, classification, indexing and association rule mining of time-series data. A variety of algorithms have been proposed to obtain this representation, with several algorithms having been independently rediscovered several times. In this paper, we undertake the first extensive review and empirical comparison of all proposed techniques. We show that all these algorithms have fatal flaws from a data-mining perspective. We introduce a novel algorithm that we empirically show to be superior to all others in the literature." ] }
0811.4672
2953306525
In many emerging applications, data streams are monitored in a network environment. Due to limited communication bandwidth and other resource constraints, a critical and practical demand is to online compress data streams continuously with quality guarantee. Although many data compression and digital signal processing methods have been developed to reduce data volume, their super-linear time and more-than-constant space complexity prevents them from being applied directly on data streams, particularly over resource-constrained sensor networks. In this paper, we tackle the problem of online quality guaranteed compression of data streams using fast linear approximation (i.e., using line segments to approximate a time series). Technically, we address two versions of the problem which explore quality guarantees in different forms. We develop online algorithms with linear time complexity and constant cost in space. Our algorithms are optimal in the sense they generate the minimum number of segments that approximate a time series with the required quality guarantee. To meet the resource constraints in sensor networks, we also develop a fast algorithm which creates connecting segments with very simple computation. The low cost nature of our methods leads to a unique edge on the applications of massive and fast streaming environment, low bandwidth networks, and heavily constrained nodes in computational power. We implement and evaluate our methods in the application of an acoustic wireless sensor network.
In @cite_0 , an amnesic function is introduced to give different weights to different points in the time series, so that the tolerated approximation error can grow with the age of a point. The PLA-SegmentBound problem is discussed in the context of the Unrestricted Window with Absolute Amnesic (UAA) problem, but complete solutions to this problem are not provided in @cite_0 .
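As a small illustration of the amnesic idea (our own linear example, not the representation of @cite_0 ), the check below lets the tolerated approximation error of a point grow with its age:

```python
import numpy as np

def satisfies_amnesic_bound(errors, base_error, slope):
    """Linear amnesic bound: a point of age a may deviate by at most base_error + slope * a.
    errors[i] is the absolute approximation error of the point with age i (0 = most recent)."""
    ages = np.arange(len(errors))
    return bool(np.all(np.abs(errors) <= base_error + slope * ages))

errors = np.array([0.01, 0.05, 0.2, 0.4])   # older points carry larger errors
print(satisfies_amnesic_bound(errors, base_error=0.05, slope=0.15))   # -> True
```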
{ "cite_N": [ "@cite_0" ], "mid": [ "2115366404" ], "abstract": [ "The past decade has seen a wealth of research on time series representations, because the manipulation, storage, and indexing of large volumes of raw time series data is impractical. The vast majority of research has concentrated on representations that are calculated in batch mode and represent each value with approximately equal fidelity. However, the increasing deployment of mobile devices and real time sensors has brought home the need for representations that can be incrementally updated, and can approximate the data with fidelity proportional to its age. The latter property allows us to answer queries about the recent past with greater precision, since in many domains recent information is more useful than older information. We call such representations amnesic. While there has been previous work on amnesic representations, the class of amnesic functions possible was dictated by the representation itself. We introduce a novel representation of time series that can represent arbitrary, user-specified amnesic functions. For example, a meteorologist may decide that data that is twice as old can tolerate twice as much error, and thus, specify a linear amnesic function. In contrast, an econometrist might opt for an exponential amnesic function. We propose online algorithms for our representation, and discuss their properties. Finally, we perform an extensive empirical evaluation on 40 datasets, and show that our approach can efficiently maintain a high quality amnesic approximation." ] }
0811.4672
2953306525
In many emerging applications, data streams are monitored in a network environment. Due to limited communication bandwidth and other resource constraints, a critical and practical demand is to online compress data streams continuously with quality guarantee. Although many data compression and digital signal processing methods have been developed to reduce data volume, their super-linear time and more-than-constant space complexity prevents them from being applied directly on data streams, particularly over resource-constrained sensor networks. In this paper, we tackle the problem of online quality guaranteed compression of data streams using fast linear approximation (i.e., using line segments to approximate a time series). Technically, we address two versions of the problem which explore quality guarantees in different forms. We develop online algorithms with linear time complexity and constant cost in space. Our algorithms are optimal in the sense they generate the minimum number of segments that approximate a time series with the required quality guarantee. To meet the resource constraints in sensor networks, we also develop a fast algorithm which creates connecting segments with very simple computation. The low cost nature of our methods leads to a unique edge on the applications of massive and fast streaming environment, low bandwidth networks, and heavily constrained nodes in computational power. We implement and evaluate our methods in the application of an acoustic wireless sensor network.
The PLA-PointBound problem is addressed in @cite_10 , using a different definition of the point error bound. The algorithm is claimed to be optimal, but its time complexity is @math , where @math is the number of points in the time series. Moreover, no performance evaluation of the solution is presented in the paper.
{ "cite_N": [ "@cite_10" ], "mid": [ "1931837783" ], "abstract": [ "A new piecewise linear method is presented for the approximation of digitized curves. This method produces a sequence of consecutive line segments and has the following characteristics: (i) it approximates the digitized curve with the minimum number of line segments, (ii) the Euclidean distance between each point of the digitized curve and the line segment that approximates it, does not exceed a boundary value spl epsiv and (iii) the vertices of the produced line are not (necessarily) points of the input curve." ] }
0811.2841
2951448804
A mechanism for releasing information about a statistical database with sensitive data must resolve a trade-off between utility and privacy. Privacy can be rigorously quantified using the framework of differential privacy , which requires that a mechanism's output distribution is nearly the same whether or not a given database row is included or excluded. The goal of this paper is strong and general utility guarantees, subject to differential privacy. We pursue mechanisms that guarantee near-optimal utility to every potential user, independent of its side information (modeled as a prior distribution over query results) and preferences (modeled via a loss function). Our main result is: for each fixed count query and differential privacy level, there is a geometric mechanism @math -- a discrete variant of the simple and well-studied Laplace mechanism -- that is simultaneously expected loss-minimizing for every possible user, subject to the differential privacy constraint. This is an extremely strong utility guarantee: every potential user @math , no matter what its side information and preferences, derives as much utility from @math as from interacting with a differentially private mechanism @math that is optimally tailored to @math .
Dinur and Nissim @cite_24 showed that for a database with @math rows, answering @math randomly chosen subset count queries with @math error allows an adversary to reconstruct most of the rows of the database (a blatant privacy breach); see @cite_20 for a more robust impossibility result of the same type. Most of the differential privacy literature circumvents these impossibility results by focusing on interactive models where a mechanism supplies answers to only a sub-linear (in @math ) number of queries. Count queries (e.g. @cite_24 @cite_17 ) and more general queries (e.g. @cite_8 @cite_12 ) have been studied from this perspective.
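The reconstruction phenomenon can be illustrated with a toy brute-force attack on a tiny bit-database (the actual attack of @cite_24 is based on linear programming and scales to realistic sizes; all parameters and names below are arbitrary assumptions):

```python
import itertools
import random

n = 8                                   # database of n bits (tiny, so brute force is feasible)
secret = [random.randint(0, 1) for _ in range(n)]
queries = [random.sample(range(n), k=random.randint(1, n)) for _ in range(4 * n)]
noisy = [sum(secret[i] for i in q) + random.uniform(-0.4, 0.4) for q in queries]  # small noise

# Pick the candidate database whose subset sums best match the noisy answers.
best = min(itertools.product([0, 1], repeat=n),
           key=lambda cand: max(abs(sum(cand[i] for i in q) - a) for q, a in zip(queries, noisy)))

print("recovered rows:", sum(int(b == s) for b, s in zip(best, secret)), "out of", n)
```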
{ "cite_N": [ "@cite_8", "@cite_24", "@cite_20", "@cite_12", "@cite_17" ], "mid": [ "2517104773", "2110868467", "2120806354", "2101771965", "44899178" ], "abstract": [ "We continue a line of research initiated in [10, 11] on privacy-preserving statistical databases. Consider a trusted server that holds a database of sensitive information. Given a query function f mapping databases to reals, the so-called true answer is the result of applying f to the database. To protect privacy, the true answer is perturbed by the addition of random noise generated according to a carefully chosen distribution, and this response, the true answer plus noise, is returned to the user. Previous work focused on the case of noisy sums, in which f = Σ i g(x i ), where x i denotes the ith row of the database and g maps database rows to [0,1]. We extend the study to general functions f, proving that privacy can be preserved by calibrating the standard deviation of the noise according to the sensitivity of the function f. Roughly speaking, this is the amount that any single argument to f can change its output. The new analysis shows that for several particular applications substantially less noise is needed than was previously understood to be the case. The first step is a very clean characterization of privacy in terms of indistinguishability of transcripts. Additionally, we obtain separation results showing the increased value of interactive sanitization mechanisms over non-interactive.", "We examine the tradeoff between privacy and usability of statistical databases. We model a statistical database by an n-bit string d 1 ,..,d n , with a query being a subset q ⊆ [n] to be answered by Σ ieq d i . Our main result is a polynomial reconstruction algorithm of data from noisy (perturbed) subset sums. Applying this reconstruction algorithm to statistical databases we show that in order to achieve privacy one has to add perturbation of magnitude (Ω√n). That is, smaller perturbation always results in a strong violation of privacy. We show that this result is tight by exemplifying access algorithms for statistical databases that preserve privacy while adding perturbation of magnitude O(√n).For time-T bounded adversaries we demonstrate a privacypreserving access algorithm whose perturbation magnitude is ≈ √T.", "This work is at theintersection of two lines of research. One line, initiated by Dinurand Nissim, investigates the price, in accuracy, of protecting privacy in a statistical database. The second, growing from an extensive literature on compressed sensing (see in particular the work of Donoho and collaborators [4,7,13,11])and explicitly connected to error-correcting codes by Candes and Tao ([4]; see also [5,3]), is in the use of linearprogramming for error correction. Our principal result is the discovery of a sharp threshhold ρ*∠ 0.239, so that if ρ In the context of privacy-preserving datamining our results say thatany privacy mechanism, interactive or non-interactive, providingreasonably accurate answers to a 0.761 fraction of randomly generated weighted subset sum queries, and arbitrary answers on the remaining 0.239 fraction, is blatantly non-private.", "We introduce a new, generic framework for private data analysis.The goal of private data analysis is to release aggregate information about a data set while protecting the privacy of the individuals whose information the data set contains.Our framework allows one to release functions f of the data withinstance-based additive noise. 
That is, the noise magnitude is determined not only by the function we want to release, but also bythe database itself. One of the challenges is to ensure that the noise magnitude does not leak information about the database. To address that, we calibrate the noise magnitude to the smoothsensitivity of f on the database x --- a measure of variabilityof f in the neighborhood of the instance x. The new frameworkgreatly expands the applicability of output perturbation, a technique for protecting individuals' privacy by adding a smallamount of random noise to the released statistics. To our knowledge, this is the first formal analysis of the effect of instance-basednoise in the context of data privacy. Our framework raises many interesting algorithmic questions. Namely,to apply the framework one must compute or approximate the smoothsensitivity of f on x. We show how to do this efficiently for several different functions, including the median and the cost ofthe minimum spanning tree. We also give a generic procedure based on sampling that allows one to release f(x) accurately on manydatabases x. This procedure is applicable even when no efficient algorithm for approximating smooth sensitivity of f is known orwhen f is given as a black box. We illustrate the procedure by applying it to k-SED (k-means) clustering and learning mixtures of Gaussians.", "In a recent paper Dinur and Nissim considered a statistical database in which a trusted database administrator monitors queries and introduces noise to the responses with the goal of maintaining data privacy [5]. Under a rigorous definition of breach of privacy, Dinur and Nissim proved that unless the total number of queries is sub-linear in the size of the database, a substantial amount of noise is required to avoid a breach, rendering the database almost useless." ] }
0811.2841
2951448804
A mechanism for releasing information about a statistical database with sensitive data must resolve a trade-off between utility and privacy. Privacy can be rigorously quantified using the framework of differential privacy , which requires that a mechanism's output distribution is nearly the same whether or not a given database row is included or excluded. The goal of this paper is strong and general utility guarantees, subject to differential privacy. We pursue mechanisms that guarantee near-optimal utility to every potential user, independent of its side information (modeled as a prior distribution over query results) and preferences (modeled via a loss function). Our main result is: for each fixed count query and differential privacy level, there is a geometric mechanism @math -- a discrete variant of the simple and well-studied Laplace mechanism -- that is simultaneously expected loss-minimizing for every possible user, subject to the differential privacy constraint. This is an extremely strong utility guarantee: every potential user @math , no matter what its side information and preferences, derives as much utility from @math as from interacting with a differentially private mechanism @math that is optimally tailored to @math .
The authors of @cite_1 take a different approach by restricting attention to count queries drawn from a fixed class; they obtain non-interactive mechanisms that provide simultaneously good accuracy (in terms of worst-case error) for all count queries from a class with polynomial VC dimension. The authors of @cite_11 give further results on privately learning hypotheses from a given class.
{ "cite_N": [ "@cite_1", "@cite_11" ], "mid": [ "2169570643", "2097272254" ], "abstract": [ "We demonstrate that, ignoring computational constraints, it is possible to release privacy-preserving databases that are useful for all queries over a discretized domain from any given concept class with polynomial VC-dimension. We show a new lower bound for releasing databases that are useful for halfspace queries over a continuous domain. Despite this, we give a privacy-preserving polynomial time algorithm that releases information useful for all halfspace queries, for a slightly relaxed definition of usefulness. Inspired by learning theory, we introduce a new notion of data privacy, which we call distributional privacy, and show that it is strictly stronger than the prevailing privacy notion, differential privacy.", "Learning problems form an important category of computational tasks that generalizes many of the computations researchers apply to large real-life data sets. We ask: what concept classes can be learned privately, namely, by an algorithm whose output does not depend too heavily on any one input or specific training example? More precisely, we investigate learning algorithms that satisfy differential privacy, a notion that provides strong confidentiality guarantees in the contexts where aggregate information is released about a database containing sensitive information about individuals. We present several basic results that demonstrate general feasibility of private learning and relate several models previously studied separately in the contexts of privacy and standard learning." ] }
0811.2841
2951448804
A mechanism for releasing information about a statistical database with sensitive data must resolve a trade-off between utility and privacy. Privacy can be rigorously quantified using the framework of differential privacy , which requires that a mechanism's output distribution is nearly the same whether or not a given database row is included or excluded. The goal of this paper is strong and general utility guarantees, subject to differential privacy. We pursue mechanisms that guarantee near-optimal utility to every potential user, independent of its side information (modeled as a prior distribution over query results) and preferences (modeled via a loss function). Our main result is: for each fixed count query and differential privacy level, there is a geometric mechanism @math -- a discrete variant of the simple and well-studied Laplace mechanism -- that is simultaneously expected loss-minimizing for every possible user, subject to the differential privacy constraint. This is an extremely strong utility guarantee: every potential user @math , no matter what its side information and preferences, derives as much utility from @math as from interacting with a differentially private mechanism @math that is optimally tailored to @math .
The use of "abstract utility functions" in McSherry and Talwar @cite_18 has a similar flavor to our use of loss functions, though the motivations and goals of their work and ours are unrelated. Motivated by pricing problems, McSherry and Talwar @cite_18 design differentially private mechanisms for queries that can have very different values on neighboring databases (unlike count queries); they do not consider users with side information (i.e., priors) and do not formulate a notion of mechanism optimality (simultaneous or otherwise).
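For concreteness, here is a minimal sketch of the geometric mechanism discussed in the abstract above, applied to a single count query: two-sided geometric noise with parameter alpha = exp(-epsilon) is added to the true count (the truncation of the output to the valid range is omitted, and the sampling routine and names are our own choices):

```python
import math
import random

def geometric_mechanism(true_count, epsilon):
    """Return true_count + Z, where P[Z = z] is proportional to exp(-epsilon * |z|)."""
    alpha = math.exp(-epsilon)

    def sample_geometric():
        # Number of failures before the first success, with success probability 1 - alpha.
        u = 1.0 - random.random()           # u in (0, 1]
        return int(math.log(u) / math.log(alpha))

    # The difference of two i.i.d. geometric variables is two-sided geometric.
    return true_count + sample_geometric() - sample_geometric()

print([geometric_mechanism(42, epsilon=0.5) for _ in range(5)])
```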
{ "cite_N": [ "@cite_18" ], "mid": [ "1993116423" ], "abstract": [ "We study the role that privacy-preserving algorithms, which prevent the leakage of specific information about participants, can play in the design of mechanisms for strategic agents, which must encourage players to honestly report information. Specifically, we show that the recent notion of differential privacv, in addition to its own intrinsic virtue, can ensure that participants have limited effect on the outcome of the mechanism, and as a consequence have limited incentive to lie. More precisely, mechanisms with differential privacy are approximate dominant strategy under arbitrary player utility functions, are automatically resilient to coalitions, and easily allow repeatability. We study several special cases of the unlimited supply auction problem, providing new results for digital goods auctions, attribute auctions, and auctions with arbitrary structural constraints on the prices. As an important prelude to developing a privacy-preserving auction mechanism, we introduce and study a generalization of previous privacy work that accommodates the high sensitivity of the auction setting, where a single participant may dramatically alter the optimal fixed price, and a slight change in the offered price may take the revenue from optimal to zero." ] }
0811.3301
1974339580
The dynamic time warping (DTW) is a popular similarity measure between time series. The DTW fails to satisfy the triangle inequality and its computation requires quadratic time. Hence, to find closest neighbors quickly, we use bounding techniques. We can avoid most DTW computations with an inexpensive lower bound (LB_Keogh). We compare LB_Keogh with a tighter lower bound (LB_Improved). We find that LB_Improved-based search is faster. As an example, our approach is 2-3 times faster over random-walk and shape time series.
Besides DTW, several similarity measures have been proposed, including the directed and general Hausdorff distance, Pearson's correlation, the nonlinear elastic matching distance @cite_17 , the Edit distance with Real Penalty (ERP) @cite_53 , the Needleman-Wunsch similarity @cite_30 , the Smith-Waterman similarity @cite_19 , and SimilB @cite_27 .
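Since the DTW is the reference point for the measures listed above, a minimal quadratic-time dynamic-programming implementation (ours; squared point-wise cost, no warping-window constraint) may help fix ideas and makes it clear why the lower-bounding techniques of the abstract are needed:

```python
import numpy as np

def dtw(x, y):
    """Unconstrained DTW with squared point-wise cost (quadratic time and space)."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (x[i - 1] - y[j - 1]) ** 2
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

print(dtw(np.array([0.0, 1.0, 2.0, 1.0]), np.array([0.0, 0.0, 1.0, 2.0, 1.0])))
```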
{ "cite_N": [ "@cite_30", "@cite_53", "@cite_19", "@cite_27", "@cite_17" ], "mid": [ "2074231493", "113257341", "2087064593", "1993705020", "2113530345" ], "abstract": [ "A computer adaptable method for finding similarities in the amino acid sequences of two proteins has been developed. From these findings it is possible to determine whether significant homology exists between the proteins. This information is used to trace their possible evolutionary development. The maximum match is a number dependent upon the similarity of the sequences. One of its definitions is the largest number of amino acids of one protein that can be matched with those of a second protein allowing for all possible interruptions in either of the sequences. While the interruptions give rise to a very large number of comparisons, the method efficiently excludes from consideration those comparisons that cannot contribute to the maximum match. Comparisons are made from the smallest unit of significance, a pair of amino acids, one from each protein. All possible pairs are represented by a two-dimensional array, and all possible comparisons are represented by pathways through the array. For this maximum match only certain of the possible pathways must, be evaluated. A numerical value, one in this case, is assigned to every cell in the array representing like amino acids. The maximum match is the largest number that would result from summing the cell values of every", "A rolling parallel printer in which a pressure element is driven through a swiveling motion each printing cycle and a pressure segment thereof rolls off a line of type. The pressure element is connected to a mechanical linkage which minimizes the sweep of travel of the pressure element, while maintaining the pressure element sufficiently far from the type in a rest position to facilitate reading of the printed matter.", "", "A new similarity measure, called SimilB, for time series analysis, based on the cross-ΨB-energy operator (2004), is introduced. ΨB is a nonlinear measure which quantifies the interaction between two time series. Compared to Euclidean distance (ED) or the Pearson correlation coefficient (CC), SimilB includes the temporal information and relative changes of the time series using the first and second derivatives of the time series. SimilB is well suited for both nonstationary and stationary time series and particularly those presenting discontinuities. Some new properties of ΨB are presented. Particularly, we show that ΨB as similarity measure is robust to both scale and time shift. SimilB is illustrated with synthetic time series and an artificial dataset and compared to the CC and the ED measures.", "Shape matching is an important ingredient in shape retrieval, recognition and classification, alignment and registration, and approximation and simplification. This paper treats various aspects that are needed to solve shape matching problems: choosing the precise problem, selecting the properties of the similarity measure that are needed for the problem, choosing the specific similarity measure, and constructing the algorithm to compute the similarity. The focus is on methods that lie close to the field of computational geometry." ] }
0811.3301
1974339580
The dynamic time warping (DTW) is a popular similarity measure between time series. The DTW fails to satisfy the triangle inequality and its computation requires quadratic time. Hence, to find closest neighbors quickly, we use bounding techniques. We can avoid most DTW computations with an inexpensive lower bound (LB_Keogh). We compare LB_Keogh with a tighter lower bound (LB_Improved). We find that LB_Improved-based search is faster. As an example, our approach is 2-3 times faster over random-walk and shape time series.
The authors of @cite_33 have shown that retrieval under the DTW can be made faster by mixing progressively finer resolutions and by applying early abandoning @cite_31 to the dynamic-programming computation.
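A minimal sketch of the envelope-based lower bound (LB_Keogh) combined with early abandoning against a best-so-far value; the warping-window parameter r, the squared-error form, the equal-length assumption, and all names are our own choices:

```python
import numpy as np

def lb_keogh(query, candidate, r, best_so_far=np.inf):
    """Envelope-based lower bound on DTW(query, candidate); assumes equal-length series."""
    total = 0.0
    for i, c in enumerate(candidate):
        lo, hi = max(0, i - r), min(len(query), i + r + 1)
        upper, lower = np.max(query[lo:hi]), np.min(query[lo:hi])   # query envelope at position i
        if c > upper:
            total += (c - upper) ** 2
        elif c < lower:
            total += (c - lower) ** 2
        if total >= best_so_far:      # early abandoning: the bound already exceeds the best match
            break
    return total

q = np.array([0.0, 1.0, 2.0, 1.0, 0.0])
c = np.array([0.0, 2.5, 2.0, 1.0, -0.5])
print(lb_keogh(q, c, r=1))
```

If the returned bound already exceeds the best DTW distance found so far, the full quadratic DTW computation for that candidate can be skipped.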
{ "cite_N": [ "@cite_31", "@cite_33" ], "mid": [ "2159138228", "1968010112" ], "abstract": [ "In many applications, it is desirable to monitor a streaming time series for predefined patterns. In domains as diverse as the monitoring of space telemetry, patient intensive care data, and insect populations, where data streams at a high rate and the number of predefined patterns is large, it may be impossible for the comparison algorithm to keep up. We propose a novel technique that exploits the commonality among the predefined patterns to allow monitoring at higher bandwidths, while maintaining a guarantee of no false dismissals. Our approach is based on the widely used envelope-based lower bounding technique. Extensive experiments demonstrate that our approach achieves tremendous improvements in performance in the offline case, and significant improvements in the fastest possible arrival rate of the data stream that can be processed with guaranteed no false dismissal.", "Time-series data naturally arise in countless domains, such as meteorology, astrophysics, geology, multimedia, and economics. Similarity search is very popular, and DTW (Dynamic Time Warping) is one of the two prevailing distance measures. Although DTW incurs a heavy computation cost, it provides scaling along the time axis. In this paper, we propose FTW (Fast search method for dynamic Time Warping), which guarantees no false dismissals in similarity query processing. FTW efficiently prunes a significant number of the search cost. Experiments on real and synthetic sequence data sets reveals that FTW is significantly faster than the best existing method, up to 222 times." ] }
0810.5582
1933583770
In this paper we consider the problem of anonymizing datasets in which each individual is associated with a set of items that constitute private information about the individual. Illustrative datasets include market-basket datasets and search engine query logs. We formalize the notion of k-anonymity for set-valued data as a variant of the k-anonymity model for traditional relational datasets. We define an optimization problem that arises from this definition of anonymity and provide O(klogk) and O(1)-approximation algorithms for the same. We demonstrate applicability of our algorithms to the America Online query log dataset.
In @cite_14 the authors study the problem of anonymizing market-basket data. They propose a notion of anonymity similar to @math -anonymity, where a limit is placed on the number of private items of any individual that could be known to an attacker beforehand. The authors provide generalization algorithms to achieve the anonymity requirements. For example, the item 'milk' in a user's basket may be generalized to 'dairy product' in order to protect it. In contrast, the techniques we propose consider additions and deletions to the dataset instead of generalizations. Further, we demonstrate the applicability of our algorithms to search engine query log data as well, where there is no obvious underlying hierarchy that can be used to generalize queries.
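A toy sketch of the generalization step described above, using a hypothetical item hierarchy (the actual algorithms of @cite_14 and the addition/deletion-based alternative are not reproduced here):

```python
# Hypothetical generalization hierarchy, for illustration only.
HIERARCHY = {"milk": "dairy product", "cheese": "dairy product", "beer": "beverage"}

def generalize(basket, private_items):
    """Replace private items by their parent category; leave the remaining items untouched."""
    return [HIERARCHY.get(item, item) if item in private_items else item for item in basket]

print(generalize(["milk", "bread", "beer"], private_items={"milk"}))
# -> ['dairy product', 'bread', 'beer']
```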
{ "cite_N": [ "@cite_14" ], "mid": [ "2135930857" ], "abstract": [ "We present a new class of statistical de- anonymization attacks against high-dimensional micro-data, such as individual preferences, recommendations, transaction records and so on. Our techniques are robust to perturbation in the data and tolerate some mistakes in the adversary's background knowledge. We apply our de-anonymization methodology to the Netflix Prize dataset, which contains anonymous movie ratings of 500,000 subscribers of Netflix, the world's largest online movie rental service. We demonstrate that an adversary who knows only a little bit about an individual subscriber can easily identify this subscriber's record in the dataset. Using the Internet Movie Database as the source of background knowledge, we successfully identified the Netflix records of known users, uncovering their apparent political preferences and other potentially sensitive information." ] }
0810.5582
1933583770
In this paper we consider the problem of anonymizing datasets in which each individual is associated with a set of items that constitute private information about the individual. Illustrative datasets include market-basket datasets and search engine query logs. We formalize the notion of k-anonymity for set-valued data as a variant of the k-anonymity model for traditional relational datasets. We define an optimization problem that arises from this definition of anonymity and provide O(klogk) and O(1)-approximation algorithms for the same. We demonstrate applicability of our algorithms to the America Online query log dataset.
Our @math -approximation algorithm is derived by reducing the anonymization problem to a clustering problem. Clustering techniques for achieving anonymity have also been studied in @cite_19 ; however, there the authors seek to minimize the maximum radius of the clustering, whereas we wish to minimize the sum of the Hamming distances of the points to their cluster centers.
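The sketch below only spells out the objective discussed above, not the approximation algorithm: records are treated as binary item vectors, the cost of an assignment to cluster centers is the sum of Hamming distances, and every cluster must contain at least k records; all names and data are ours.

```python
from collections import Counter

import numpy as np

def clustering_cost(records, centers, assignment, k):
    """Sum of Hamming distances to assigned centers; valid only if every cluster has >= k records."""
    sizes = Counter(assignment)
    if any(size < k for size in sizes.values()):
        return None                                   # violates the anonymity requirement
    return int(sum(np.sum(r != centers[a]) for r, a in zip(records, assignment)))

records = np.array([[1, 0, 1, 0], [1, 0, 0, 0], [0, 1, 1, 1], [0, 1, 1, 0]])
centers = np.array([[1, 0, 1, 0], [0, 1, 1, 0]])
print(clustering_cost(records, centers, assignment=[0, 0, 1, 1], k=2))   # -> 2
```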
{ "cite_N": [ "@cite_19" ], "mid": [ "2120263102" ], "abstract": [ "Publishing data for analysis from a table containing personal records, while maintaining individual privacy, is a problem of increasing importance today. The traditional approach of de-identifying records is to remove identifying fields such as social security number, name etc. However, recent research has shown that a large fraction of the US population can be identified using non-key attributes (called quasi-identifiers) such as date of birth, gender, and zip code [15]. Sweeney [16] proposed the k-anonymity model for privacy where non-key attributes that leak information are suppressed or generalized so that, for every record in the modified table, there are at least k−1 other records having exactly the same values for quasi-identifiers. We propose a new method for anonymizing data records, where quasi-identifiers of data records are first clustered and then cluster centers are published. To ensure privacy of the data records, we impose the constraint that each cluster must contain no fewer than a pre-specified number of data records. This technique is more general since we have a much larger choice for cluster centers than k-Anonymity. In many cases, it lets us release a lot more information without compromising privacy. We also provide constant-factor approximation algorithms to come up with such a clustering. This is the first set of algorithms for the anonymization problem where the performance is independent of the anonymity parameter k. We further observe that a few outlier points can significantly increase the cost of anonymization. Hence, we extend our algorithms to allow an e fraction of points to remain unclustered, i.e., deleted from the anonymized publication. Thus, by not releasing a small fraction of the database records, we can ensure that the data published for analysis has less distortion and hence is more useful. Our approximation algorithms for new clustering objectives are of independent interest and could be applicable in other clustering scenarios as well." ] }
0810.5582
1933583770
In this paper we consider the problem of anonymizing datasets in which each individual is associated with a set of items that constitute private information about the individual. Illustrative datasets include market-basket datasets and search engine query logs. We formalize the notion of k-anonymity for set-valued data as a variant of the k-anonymity model for traditional relational datasets. We define an optimization problem that arises from this definition of anonymity and provide O(klogk) and O(1)-approximation algorithms for the same. We demonstrate applicability of our algorithms to the America Online query log dataset.
In @cite_6 the authors propose the notion of @math -coherence for anonymizing transactional data. Here, once again, items are divided into public and private items. The goal of the anonymization is to ensure that, for any set of @math public items, either no transaction contains this set, or at least @math transactions contain it and no more than @math percent of these transactions contain a common private item. The authors consider the minimal number of suppressions required to achieve these anonymity goals; however, no theoretical guarantees are given.
{ "cite_N": [ "@cite_6" ], "mid": [ "2000646855" ], "abstract": [ "This paper considers the problem of publishing \"transaction data\" for research purposes. Each transaction is an arbitrary set of items chosen from a large universe. Detailed transaction data provides an electronic image of one's life. This has two implications. One, transaction data are excellent candidates for data mining research. Two, use of transaction data would raise serious concerns over individual privacy. Therefore, before transaction data is released for data mining, it must be made anonymous so that data subjects cannot be re-identified. The challenge is that transaction data has no structure and can be extremely high dimensional. Traditional anonymization methods lose too much information on such data. To date, there has been no satisfactory privacy notion and solution proposed for anonymizing transaction data. This paper proposes one way to address this issue." ] }
0810.5582
1933583770
In this paper we consider the problem of anonymizing datasets in which each individual is associated with a set of items that constitute private information about the individual. Illustrative datasets include market-basket datasets and search engine query logs. We formalize the notion of k-anonymity for set-valued data as a variant of the k-anonymity model for traditional relational datasets. We define an optimization problem that arises from this definition of anonymity and provide O(klogk) and O(1)-approximation algorithms for the same. We demonstrate applicability of our algorithms to the America Online query log dataset.
With regard to search engine query logs, there has been work on identifying privacy attacks both on users @cite_15 and on companies whose websites appear in query results and get clicked on @cite_3 . We do not consider the latter kind of privacy attack in this paper. The work in @cite_15 considers an anonymization procedure wherein keywords in queries are replaced by secure hashes. The authors show that such a procedure is susceptible to statistical attacks on the hashed keywords, leading to privacy breaches. There has also been work on defending against privacy attacks on users in @cite_17 . This line of work considers heuristics such as the removal of infrequent queries and develops methods to apply such techniques on the fly as new queries are posed. In contrast, we consider a static scenario wherein a search engine would like to publicly release an existing set of query logs.
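A minimal sketch of the token-based hashing scheme analyzed in @cite_15 (salt handling and digest truncation are our own simplifications); note that identical keywords map to identical digests, which is exactly what makes the statistical attacks described above possible:

```python
import hashlib

def hash_query(query, salt=b"log-release-2008"):
    """Replace each query keyword by a truncated salted SHA-256 digest."""
    return " ".join(hashlib.sha256(salt + token.encode("utf-8")).hexdigest()[:12]
                    for token in query.lower().split())

print(hash_query("cheap flights boston"))
```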
{ "cite_N": [ "@cite_15", "@cite_3", "@cite_17" ], "mid": [ "2170258874", "1607430673", "93415635" ], "abstract": [ "In this paper we study the privacy preservation properties of aspecific technique for query log anonymization: token-based hashing. In this approach, each query is tokenized, and then a secure hash function is applied to each token. We show that statistical techniques may be applied to partially compromise the anonymization. We then analyze the specific risks that arise from these partial compromises, focused on revelation of identity from unambiguous names, addresses, and so forth, and the revelation of facts associated with an identity that are deemed to be highly sensitive. Our goal in this work is two fold: to show that token-based hashing is unsuitable for anonymization, and to present a concrete analysis of specific techniques that may be effective in breaching privacy, against which other anonymization schemes should be measured.", "In this paper we study privacy preservation for the publication of search engine query logs. We introduce a new privacy concern, website privacy as a special case of business privacy.We define the possible adversaries who could be interested in disclosing website information and the vulnerabilities in the query log, which they could exploit. We elaborate on anonymization techniques to protect website information, discuss different types of attacks that an adversary could use and propose an anonymization strategy for one of these attacks. We then present a graph-based heuristic to validate the effectiveness of our anonymization method and perform an experimental evaluation of this approach. Our experimental results show that the query log can be appropriately anonymized against the specific attack, while retaining a significant volume of useful data.", "The recent release of the American Online (AOL) Query Logs highlighted the remarkable amount of private and identifying information that users are willing to reveal to a search engine. The release of these types of log files therefore represents a significant liability and compromise of user privacy. However, without such data the academic community greatly suffers in their ability to conduct research on real search engines. This paper proposes two specific solutions (rather than an overly general framework) that attempts to balance the needs of certain types of research while individual privacy. The first solution, based on a threshold cryptography system, eliminates highly identifying queries, in real time, without preserving history or statistics about previous behavior. The second solution attempts to deal with sets of queries, that when taken in aggregate, are overly identifying. Both are novel and represent additional options for data anonymization." ] }
0810.5325
2951937062
This paper addresses the problem of 3D face recognition using simultaneous sparse approximations on the sphere. The 3D face point clouds are first aligned with a novel and fully automated registration process. They are then represented as signals on the 2D sphere in order to preserve depth and geometry information. Next, we implement a dimensionality reduction process with simultaneous sparse approximations and subspace projection. It permits to represent each 3D face by only a few spherical functions that are able to capture the salient facial characteristics, and hence to preserve the discriminant facial information. We eventually perform recognition by effective matching in the reduced space, where Linear Discriminant Analysis can be further activated for improved recognition performance. The 3D face recognition algorithm is evaluated on the FRGC v.1.0 data set, where it is shown to outperform classical state-of-the-art solutions that work with depth images.
3D face recognition has attracted a lot of research effort in the past few decades, due to the advent of new sensing technologies and the high potential of 3D methods for building robust systems that are invariant to head pose and illumination variations. We review in this section the most relevant work in 3D face recognition, which can be categorized into methods using point cloud representations, depth images, facial surface features, or spherical representations, respectively. Surveys of the state of the art in 3D face recognition are further provided in @cite_12 @cite_9 .
{ "cite_N": [ "@cite_9", "@cite_12" ], "mid": [ "1517271056", "2161308290" ], "abstract": [ "Face recognition (FR) is the preferred mode of identity recognition by humans: It is natural, robust and unintrusive. However, automatic FR techniques have failed to match up to expectations: Variations in pose, illumination and expression limit the performance of 2D FR techniques. In recent years, 3D FR has shown promise to overcome these challanges. With the availability of cheaper acquisition methods, 3D face recognition can be a way out of these problems, both as a stand-alone method, or as a supplement to 2D face recognition. We review the relevant work on 3D face recognition here, and discuss merits of different representations and recognition algorithms.", "This survey focuses on recognition performed by matching models of the three-dimensional shape of the face, either alone or in combination with matching corresponding two-dimensional intensity images. Research trends to date are summarized, and challenges confronting the development of more accurate three-dimensional face recognition are identified. These challenges include the need for better sensors, improved recognition algorithms, and more rigorous experimental methodology." ] }
0810.5325
2951937062
This paper addresses the problem of 3D face recognition using simultaneous sparse approximations on the sphere. The 3D face point clouds are first aligned with a novel and fully automated registration process. They are then represented as signals on the 2D sphere in order to preserve depth and geometry information. Next, we implement a dimensionality reduction process with simultaneous sparse approximations and subspace projection. It permits to represent each 3D face by only a few spherical functions that are able to capture the salient facial characteristics, and hence to preserve the discriminant facial information. We eventually perform recognition by effective matching in the reduced space, where Linear Discriminant Analysis can be further activated for improved recognition performance. The 3D face recognition algorithm is evaluated on the FRGC v.1.0 data set, where it is shown to outperform classical state-of-the-art solutions that work with depth images.
Many recognition systems use depth or range images, which permit formulating 3D face recognition as a problem of dimensionality reduction for planar images, where each pixel value represents the distance from the sensor to the facial surface. Principal Component Analysis (PCA) and "Eigenfaces" can be used for dimensionality reduction @cite_15 , although the basis vectors are typically holistic and of global support. PCA can be combined with Linear Discriminant Analysis (LDA) to form "Fisherfaces" with enhanced class separability properties @cite_26 . Alternatively, dimensionality reduction can be performed via variants of non-negative matrix factorization (NMF) algorithms @cite_17 @cite_4 @cite_14 that produce part-based decompositions of the depth images. Part-based decompositions based on non-negative sparse coding @cite_19 have recently been shown to provide better recognition performance than NMF methods in face recognition @cite_27 . Recent methods have proposed to concentrate dimensionality reduction around facial landmarks such as the nose tip @cite_8 or in multiple carefully chosen regions @cite_29 , or to compute geodesic distances among selected fiducial points @cite_25 . However, they require a selection of fiducial points or areas of interest that is often performed manually, which prevents the implementation of fully automatic systems.
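As a minimal illustration of the Eigenfaces-style pipeline on depth images (random placeholder data, an SVD-based PCA, and names of our own choosing):

```python
import numpy as np

def pca_project(depth_images, n_components):
    """Flatten depth images, center them, and project onto the top principal components."""
    X = depth_images.reshape(len(depth_images), -1).astype(float)
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    basis = Vt[:n_components]                 # holistic, global-support basis vectors
    return (X - mean) @ basis.T, basis, mean

depth_images = np.random.rand(20, 32, 32)     # placeholder "range images"
features, basis, mean = pca_project(depth_images, n_components=5)
print(features.shape)                         # -> (20, 5)
```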
{ "cite_N": [ "@cite_14", "@cite_26", "@cite_4", "@cite_8", "@cite_29", "@cite_19", "@cite_27", "@cite_15", "@cite_25", "@cite_17" ], "mid": [ "2118718620", "2121647436", "2059745395", "", "", "", "2025741957", "", "2122239827", "2135029798" ], "abstract": [ "Non-negative matrix factorization (NMF) is a recently developed technique for finding parts-based, linear representations of non-negative data. Although it has successfully been applied in several applications, it does not always result in parts-based representations. In this paper, we show how explicitly incorporating the notion of 'sparseness' improves the found decompositions. Additionally, we provide complete MATLAB code both for standard NMF and for our extension. Our hope is that this will further the application of these methods to solving novel data-analysis problems.", "We develop a face recognition algorithm which is insensitive to large variation in lighting direction and facial expression. Taking a pattern classification approach, we consider each pixel in an image as a coordinate in a high-dimensional space. We take advantage of the observation that the images of a particular face, under varying illumination but fixed pose, lie in a 3D linear subspace of the high dimensional image space-if the face is a Lambertian surface without shadowing. However, since faces are not truly Lambertian surfaces and do indeed produce self-shadowing, images will deviate from this linear subspace. Rather than explicitly modeling this deviation, we linearly project the image into a subspace in a manner which discounts those regions of the face with large deviation. Our projection method is based on Fisher's linear discriminant and produces well separated classes in a low-dimensional subspace, even under severe variation in lighting and facial expressions. The eigenface technique, another method based on linearly projecting the image space to a low dimensional subspace, has similar computational requirements. Yet, extensive experimental results demonstrate that the proposed \"Fisherface\" method has error rates that are lower than those of the eigenface technique for tests on the Harvard and Yale face databases.", "A new variant ‘PMF’ of factor analysis is described. It is assumed that X is a matrix of observed data and σ is the known matrix of standard deviations of elements of X. Both X and σ are of dimensions n × m. The method solves the bilinear matrix problem X = GF + E where G is the unknown left hand factor matrix (scores) of dimensions n × p, F is the unknown right hand factor matrix (loadings) of dimensions p × m, and E is the matrix of residuals. The problem is solved in the weighted least squares sense: G and F are determined so that the Frobenius norm of E divided (element-by-element) by σ is minimized. Furthermore, the solution is constrained so that all the elements of G and F are required to be non-negative. It is shown that the solutions by PMF are usually different from any solutions produced by the customary factor analysis (FA, i.e. principal component analysis (PCA) followed by rotations). Usually PMF produces a better fit to the data than FA. Also, the result of PF is guaranteed to be non-negative, while the result of FA often cannot be rotated so that all negative entries would be eliminated. Different possible application areas of the new method are briefly discussed. In environmental data, the error estimates of data can be widely varying and non-negativity is often an essential feature of the underlying models. 
Thus it is concluded that PMF is better suited than FA or PCA in many environmental applications. Examples of successful applications of PMF are shown in companion papers.", "", "", "", "Neural networks in the visual system may be performing sparse coding of learnt local features that are qualitatively very similar to the receptive fields of simple cells in the primary visual cortex, V1. In conventional sparse coding, the data are described as a combination of elementary features involving both additive and subtractive components. However, the fact that features can ‘cancel each other out’ using subtraction is contrary to the intuitive notion of combining parts to form a whole. Thus, it has recently been argued forcefully for completely non-negative representations. This paper presents Non-Negative Sparse Coding (NNSC) applied to the learning of facial features for face recognition and a comparison is made with the other part-based techniques, Non-negative Matrix Factorization (NMF) and Local-Non-negative Matrix Factorization (LNMF). The NNSC approach has been tested on the Aleix–Robert (AR), the Face Recognition Technology (FERET), the Yale B, and the Cambridge ORL databases, respectively. In doing so, we have compared and evaluated the proposed NNSC face recognition technique under varying expressions, varying illumination, occlusion with sunglasses, occlusion with scarf, and varying pose. Tests were performed with different distance metrics such as the L1-metric, L2-metric, and Normalized Cross-Correlation (NCC). All these experiments involved a large range of basis dimensions. In general, NNSC was found to be the best approach of the three part-based methods, although it must be observed that the best distance measure was not consistent for the different experiments.", "", "We present a systematic procedure for selecting facial fiducial points associated with diverse structural characteristics of a human face. We identify such characteristics from the existing literature on anthropometric facial proportions. We also present three dimensional (3D) face recognition algorithms, which employ Euclidean geodesic distances between these anthropometric fiducial points as features along with linear discriminant analysis classifiers. Furthermore, we show that in our algorithms, when anthropometric distances are replaced by distances between arbitrary regularly spaced facial points, their performances decrease substantially. This demonstrates that incorporating domain specific knowledge about the structural diversity of human faces significantly improves the performance of 3D human face recognition algorithms.", "Non-negative matrix factorization (NMF) has previously been shown to be a useful decomposition for multivariate data. Two different multiplicative algorithms for NMF are analyzed. They differ only slightly in the multiplicative factor used in the update rules. One algorithm can be shown to minimize the conventional least squares error while the other minimizes the generalized Kullback-Leibler divergence. The monotonic convergence of both algorithms can be proven using an auxiliary function analogous to that used for proving convergence of the Expectation-Maximization algorithm. The algorithms can also be interpreted as diagonally rescaled gradient descent, where the rescaling factor is optimally chosen to ensure convergence." ] }
0810.5325
2951937062
This paper addresses the problem of 3D face recognition using simultaneous sparse approximations on the sphere. The 3D face point clouds are first aligned with a novel and fully automated registration process. They are then represented as signals on the 2D sphere in order to preserve depth and geometry information. Next, we implement a dimensionality reduction process with simultaneous sparse approximations and subspace projection. It permits to represent each 3D face by only a few spherical functions that are able to capture the salient facial characteristics, and hence to preserve the discriminant facial information. We eventually perform recognition by effective matching in the reduced space, where Linear Discriminant Analysis can be further activated for improved recognition performance. The 3D face recognition algorithm is evaluated on the FRGC v.1.0 data set, where it is shown to outperform classical state-of-the-art solutions that work with depth images.
Finally, spherical representations have recently been used for modelling illumination variations @cite_20 @cite_18 or both illumination and pose variations in face images @cite_7 . Spherical representations permit an efficient representation of facial surfaces and overcome the limitations of other methods with respect to occlusions and partial views @cite_24 . To the best of our knowledge, however, the representation of 3D face point clouds as spherical signals for face recognition has not been investigated yet. We therefore propose to take advantage of the robustness of spherical representations and of spherical signal processing tools to build an effective and automatic 3D face recognition system. We perform dimensionality reduction directly on the sphere, so that the geometry of 3D faces is preserved. The reduced feature space is extracted by sparse approximations with a dictionary of localized geometric features on the sphere, which effectively capture the spatially localized and salient facial characteristics that are advantageous in the recognition process.
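The sparse-approximation step can be illustrated, in flattened Euclidean form rather than on the sphere, by a plain matching pursuit over a dictionary of unit-norm atoms (the spherical dictionary and the simultaneous multi-signal variant used in the paper are not reproduced; data and names are placeholders):

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms):
    """Greedy matching pursuit: pick the atom most correlated with the residual, then repeat."""
    residual, chosen = signal.astype(float).copy(), []
    for _ in range(n_atoms):
        correlations = dictionary @ residual          # atoms are rows, assumed unit norm
        best = int(np.argmax(np.abs(correlations)))
        coeff = correlations[best]
        residual -= coeff * dictionary[best]
        chosen.append((best, float(coeff)))
    return chosen, residual

rng = np.random.default_rng(0)
D = rng.standard_normal((50, 16))
D /= np.linalg.norm(D, axis=1, keepdims=True)         # unit-norm atoms
x = 2.0 * D[3] - 0.5 * D[17]                          # sparse combination of two atoms
print(matching_pursuit(x, D, n_atoms=2)[0])
```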
{ "cite_N": [ "@cite_24", "@cite_18", "@cite_7", "@cite_20" ], "mid": [ "1964475161", "2134875082", "2053955554", "2153690423" ], "abstract": [ "Introduces a new surface representation for recognizing curved objects. The authors approach begins by representing an object by a discrete mesh of points built from range data or from a geometric model of the object. The mesh is computed from the data by deforming a standard shaped mesh, for example, an ellipsoid, until it fits the surface of the object. The authors define local regularity constraints that the mesh must satisfy. The authors then define a canonical mapping between the mesh describing the object and a standard spherical mesh. A surface curvature index that is pose-invariant is stored at every node of the mesh. The authors use this object representation for recognition by comparing the spherical model of a reference object with the model extracted from a new observed scene. The authors show how the similarity between reference model and observed data can be evaluated and they show how the pose of the reference object in the observed scene can be easily computed using this representation. The authors present results on real range images which show that this approach to modelling and recognizing 3D objects has three main advantages: (1) it is applicable to complex curved surfaces that cannot be handled by conventional techniques; (2) it reduces the recognition problem to the computation of similarity between spherical distributions; in particular, the recognition algorithm does not require any combinatorial search; and (3) even though it is based on a spherical mapping, the approach can handle occlusions and partial views. >", "We analyze theoretically the subspace best approximating images of a convex Lambertian object taken from the same viewpoint, but under different distant illumination conditions. We analytically construct the principal component analysis for images of a convex Lambertian object, explicitly taking attached shadows into account, and find the principal eigenmodes and eigenvalues with respect to lighting variability. Our analysis makes use of an analytic formula for the irradiance in terms of spherical-harmonic coefficients of the illumination and shows, under appropriate assumptions, that the principal components or eigenvectors are identical to the spherical harmonic basis functions evaluated at the surface normal vectors. Our main contribution is in extending these results to the single-viewpoint case, showing how the principal eigenmodes and eigenvalues are affected when only a limited subset (the upper hemisphere) of normals is available and the spherical harmonics are no longer orthonormal over the restricted domain. Our results are very close, both qualitatively and quantitatively, to previous empirical observations and represent the first essentially complete theoretical explanation of these observations.", "Face recognition under varying pose is a challenging problem, especially when illumination variations are also present. In this paper, we propose to address one of the most challenging scenarios in face recognition. That is, to identify a subject from a test image that is acquired under dierent pose and illumination condition from only one training sample (also known as a gallery image) of this subject in the database. For example, the test image could be semifrontal and illuminated by multiple lighting sources while the corresponding training image is frontal under a single lighting source. 
Under the assumption of Lambertian reflectance, the spherical harmonics representation has proved to be effective in modeling illumination variations for a fixed pose. In this paper, we extend the spherical harmonics representation to encode pose information. More specifically, we utilize the fact that 2D harmonic basis images at different poses are related by closed-form linear transformations, and give a more convenient transformation matrix to be directly used for basis images. An immediate application is that we can easily synthesize a different view of a subject under arbitrary lighting conditions by changing the coefficients of the spherical harmonics representation. A more important result is an efficient face recognition method, based on the orthonormality of the linear transformations, for solving the above-mentioned challenging scenario. Thus, we directly project a nonfrontal view test image onto the space of frontal view harmonic basis images. The impact of some empirical factors due to the projection is embedded in a sparse warping matrix; for most cases, we show that the recognition performance does not deteriorate after warping the test image to the frontal view. Very good recognition results are obtained using this method for both synthetic and challenging real images.", "To deal with image variations due to the illumination problem, Ramamoorthi and Basri have recently and independently derived a spherical harmonic analysis for the Lambertian reflectance and linear subspace. Their theoretical work provided a new approach for face representation; however, both of them had the assumption that the 3D surface normal and albedo are known. This assumption limits this algorithm's application. In this paper, we present a novel method for modeling 3D face shape and albedo from only three images with unknown light directions, and this work fills the blank which Ramamoorthi and Basri left. By taking advantage of the similar 3D shape of all human faces, the highlight of the new method is that it circumvents the linear ambiguity by 3D alignment. The experimental results show that our estimated model can be successfully employed for face recognition and 3D reconstruction." ] }
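The record above describes representing aligned 3D face point clouds as signals on the 2D sphere before sparse approximation. A minimal sketch of that first step is given below; the equiangular grid resolution and the mean-radius binning rule are my assumptions, not the paper's exact procedure (which the abstract does not specify).

```python
import numpy as np

def point_cloud_to_spherical_signal(points, n_theta=64, n_phi=128):
    """Bin an aligned, origin-centred point cloud into an equiangular
    (theta, phi) grid, storing the mean radius per cell. Empty cells are
    left at zero; a real pipeline would interpolate them."""
    x, y, z = points.T
    r = np.sqrt(x**2 + y**2 + z**2)
    theta = np.arccos(np.clip(z / np.maximum(r, 1e-12), -1.0, 1.0))  # [0, pi]
    phi = np.mod(np.arctan2(y, x), 2 * np.pi)                        # [0, 2*pi)
    ti = np.minimum((theta / np.pi * n_theta).astype(int), n_theta - 1)
    pj = np.minimum((phi / (2 * np.pi) * n_phi).astype(int), n_phi - 1)
    signal = np.zeros((n_theta, n_phi))
    counts = np.zeros((n_theta, n_phi))
    np.add.at(signal, (ti, pj), r)
    np.add.at(counts, (ti, pj), 1)
    nonempty = counts > 0
    signal[nonempty] /= counts[nonempty]
    return signal

# Toy usage: a unit sphere of samples yields a near-constant spherical signal.
pts = np.random.default_rng(0).normal(size=(5000, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
grid = point_cloud_to_spherical_signal(pts)
print(grid.shape, grid[grid > 0].mean())
```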
0810.5428
2950195527
We argue that relationships between Web pages are functions of the user's intent. We identify a class of Web tasks - information-gathering - that can be facilitated by a search engine that provides links to pages which are related to the page the user is currently viewing. We define three kinds of intentional relationships that correspond to whether the user is a) seeking sources of information, b) reading pages which provide information, or c) surfing through pages as part of an extended information-gathering process. We show that these three relationships can be productively mined using a combination of textual and link information and provide three scoring mechanisms that correspond to them: SeekRel , FactRel and SurfRel . These scoring mechanisms incorporate both textual and link information. We build a set of capacitated subnetworks - each corresponding to a particular keyword - that mirror the interconnection structure of the World Wide Web. The scores are computed by computing flows on these subnetworks. The capacities of the links are derived from the hub and authority values of the nodes they connect, following the work of Kleinberg (1998) on assigning authority to pages in hyperlinked environments. We evaluated our scoring mechanism by running experiments on four data sets taken from the Web. We present user evaluations of the relevance of the top results returned by our scoring mechanisms and compare those to the top results returned by Google's Similar Pages feature, and the Companion algorithm proposed by Dean and Henzinger (1999).
In a different use of link structure related to our own, Lu et al. @cite_30 @cite_31 considered two pages to be similar if flow could be routed from one of them to the other. However, unlike our work, their capacity assignments were not based on any notion of authority. To the best of our knowledge, this is the only other mention of using flow to score similarity in the literature.
{ "cite_N": [ "@cite_30", "@cite_31" ], "mid": [ "1487148382", "2102549231" ], "abstract": [ "Networked information spaces contain information entities, corresponding to nodes, which are connected by associations, corresponding to links in the network. Examples of networked information spaces are: the World Wide Web, where information entities are web pages, and associations are hyperlinks: the scientific literature, where information entities are articles and associations are references to other articles. Similarity between information entities in a networked information space can be defined not only based on the content of the information entities, but also based on the connectivity established by the associations present. This paper explores the definition of similarity based on connectivity only, and proposes several algorithms for this purpose. Our metrics take advantage of the local neighborhoods of the nodes in the networked information space. Therefore, explicit availability of the networked information space is not required, as long as a query engine is available for following links and extracting the necessary local neighbourhoods for similarity estimation. Two variations of similarity estimation between two nodes are described, one based on the separate local neighbourhoods of the nodes, and another based on the joint local neighbourhood expanded from both nodes at the same time. The algorithms are implemented and evaluated on the citation graph of computer science. The immediate application of this work is in finding papers similar to a given paper in a digital library, but they are also applicable to other networked information spaces, such as the Web.", "Published scientific articles are linked together into a graph, the citation graph, through their citations. This paper explores the notion of similarity based on connectivity alone, and proposes several algorithms to quantify it. Our metrics take advantage of the local neighborhoods of the nodes in the citation graph. Two variants of link-based similarity estimation between two nodes are described, one based on the separate local neighborhoods of the nodes, and another based on the joint local neighborhood expanded from both nodes at the same time. The algorithms are implemented and evaluated on a subgraph of the citation graph of computer science in a retrieval context. The results are compared with text-based similarity, and demonstrate the complementarity of link-based and text-based retrieval." ] }
0810.3935
2949423092
Realistic mobility models are fundamental to evaluate the performance of protocols in mobile ad hoc networks. Unfortunately, there are no mobility models that capture the non-homogeneous behaviors in both space and time commonly found in reality, while at the same time being easy to use and analyze. Motivated by this, we propose a time-variant community mobility model, referred to as the TVC model, which realistically captures spatial and temporal correlations. We devise the communities that lead to skewed location visiting preferences, and time periods that allow us to model time dependent behaviors and periodic re-appearances of nodes at specific locations. To demonstrate the power and flexibility of the TVC model, we use it to generate synthetic traces that match the characteristics of a number of qualitatively different mobility traces, including wireless LAN traces, vehicular mobility traces, and human encounter traces. More importantly, we show that, despite the high level of realism achieved, our TVC model is still theoretically tractable. To establish this, we derive a number of important quantities related to protocol performance, such as the average node degree, the hitting time, and the meeting time, and provide examples of how to utilize this theory to guide design decisions in routing protocols.
Mobility models have long been recognized as one of the fundamental components that impact the performance of wireless ad hoc networks. A wide variety of mobility models are available in the research community (see @cite_5 for a good survey). Among all mobility models, the popularity of random mobility models (e.g., random walk, random direction, and random waypoint) is rooted in their simplicity and mathematical tractability. A number of important properties for these models have been studied, such as the stationary nodal distribution @cite_43 , the hitting and meeting times @cite_31 , and the meeting duration @cite_35 . These quantities in turn enable routing protocol analysis to produce performance bounds @cite_34 @cite_3 . However, random mobility models are based on over-simplified assumptions, and as has been shown recently, and as we will also show in this paper, the resulting mobility characteristics are very different from real-life scenarios. Hence, it is debatable whether the findings under these models will directly translate into performance in real-world implementations of MANETs.
{ "cite_N": [ "@cite_35", "@cite_3", "@cite_43", "@cite_5", "@cite_31", "@cite_34" ], "mid": [ "1986779372", "", "1966415263", "", "2137774589", "2129849999" ], "abstract": [ "Traditional mobile ad hoc routing protocols fail to deliver any data in intermittently connected mobile ad hoc networks (ICMN's) because of the absence of complete end-to-end paths in these networks. To overcome this issue, researchers have proposed to use node mobility to carry data around the network. These schemes are referred to as mobility-assisted routing schemes. A mobility-assisted routing scheme forwards data only when appropriate relays meet each other. The time it takes for them to first meet each other is referred to as the meeting time. The time duration they remain in contact with each other is called the contact time. If they fail to exchange the packet during the contact time (due to contention in the network), then they have to wait till they meet each other again. This time duration is referred to as the inter meeting time. A realistic performance analysis of any mobility-assisted routing scheme requires a knowledge of the statistics of these three quantities. These quantities vary largely depending on the mobility model at hand. This paper studies these three quantities for the three most popularly used mobility models: random direction, random waypoint and random walk models. Hence, this work allows for a realistic performance analysis of any routing scheme under any of these three mobility models", "", "The random waypoint model is a commonly used mobility model in the simulation of ad hoc networks. It is known that the spatial distribution of network nodes moving according to this model is, in general, nonuniform. However, a closed-form expression of this distribution and an in-depth investigation is still missing. This fact impairs the accuracy of the current simulation methodology of ad hoc networks and makes it impossible to relate simulation-based performance results to corresponding analytical results. To overcome these problems, we present a detailed analytical study of the spatial node distribution generated by random waypoint mobility. More specifically, we consider a generalization of the model in which the pause time of the mobile nodes is chosen arbitrarily in each waypoint and a fraction of nodes may remain static for the entire simulation time. We show that the structure of the resulting distribution is the weighted sum of three independent components: the static, pause, and mobility component. This division enables us to understand how the model's parameters influence the distribution. We derive an exact equation of the asymptotically stationary distribution for movement on a line segment and an accurate approximation for a square area. The good quality of this approximation is validated through simulations using various settings of the mobility parameters. In summary, this article gives a fundamental understanding of the behavior of the random waypoint model.", "", "Traditionally, ad hoc networks have been viewed as a connected graph over which end-to-end routing paths had to be established.Mobility was considered a necessary evil that invalidates paths and needs to be overcome in an intelligent way to allow for seamless ommunication between nodes.However, it has recently been recognized that mobility an be turned into a useful ally, by making nodes carry data around the network instead of transmitting them. 
This model of routing departs from the traditional paradigm and requires new theoretical tools to model its performance. A mobility-assisted protocol forwards data only when appropriate relays encounter each other, and thus the time between such encounters, called hitting or meeting time, is of high importance. In this paper, we derive accurate closed-form expressions for the expected encounter time between different nodes, under commonly used mobility models. We also propose a mobility model that can successfully capture some important real-world mobility characteristics, often ignored in popular mobility models, and calculate hitting times for this model as well. Finally, we integrate these results with a general theoretical framework that can be used to analyze the performance of mobility-assisted routing schemes. We demonstrate that derivative results concerning the delay of various routing schemes are very accurate, under all the mobility models examined. Hence, this work helps in better understanding the performance of various approaches in different settings, and can facilitate the design of new, improved protocols.", "Intermittently connected mobile networks are wireless networks where most of the time there does not exist a complete path from the source to the destination. There are many real networks that follow this model, for example, wildlife tracking sensor networks, military networks, vehicular ad hoc networks, etc. In this context, conventional routing schemes fail, because they try to establish complete end-to-end paths, before any data is sent. To deal with such networks researchers have suggested to use flooding-based routing schemes. While flooding-based schemes have a high probability of delivery, they waste a lot of energy and suffer from severe contention which can significantly degrade their performance. Furthermore, proposed efforts to reduce the overhead of flooding-based schemes have often been plagued by large delays. With this in mind, we introduce a new family of routing schemes that "spray" a few message copies into the network, and then route each copy independently towards the destination. We show that, if carefully designed, spray routing not only performs significantly fewer transmissions per message, but also has lower average delivery delays than existing schemes; furthermore, it is highly scalable and retains good performance under a large range of scenarios. Finally, we use our theoretical framework proposed in our 2004 paper to analyze the performance of spray routing. We also use this theory to show how to choose the number of copies to be sprayed and how to optimally distribute these copies to relays." ] }
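Hitting and meeting times of the kind analyzed in the cited works can also be estimated by brute-force simulation, which is a useful sanity check against closed-form expressions. Below is a minimal random waypoint meeting-time estimator; the area size, speed range, communication radius, and the absence of pause times are illustrative simplifications.

```python
import numpy as np

def random_waypoint_meeting_time(area=1000.0, radius=50.0, speed=(5.0, 15.0),
                                 dt=1.0, max_steps=200_000, seed=0):
    """Monte Carlo estimate of the meeting time of two independent
    random-waypoint nodes in an area x area square: simulated time until
    they first come within `radius` of each other."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(0, area, size=(2, 2))      # current positions
    dst = rng.uniform(0, area, size=(2, 2))      # current waypoints
    spd = rng.uniform(*speed, size=2)
    for step in range(max_steps):
        if np.linalg.norm(pos[0] - pos[1]) <= radius:
            return step * dt
        for i in range(2):
            delta = dst[i] - pos[i]
            dist = np.linalg.norm(delta)
            if dist < spd[i] * dt:               # waypoint reached: draw a new one
                pos[i] = dst[i]
                dst[i] = rng.uniform(0, area, size=2)
                spd[i] = rng.uniform(*speed)
            else:
                pos[i] += delta / dist * spd[i] * dt
    return np.nan                                # did not meet within the horizon

samples = [random_waypoint_meeting_time(seed=s) for s in range(20)]
print("mean meeting time (s):", np.nanmean(samples))
```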
0810.3935
2949423092
Realistic mobility models are fundamental to evaluate the performance of protocols in mobile ad hoc networks. Unfortunately, there are no mobility models that capture the non-homogeneous behaviors in both space and time commonly found in reality, while at the same time being easy to use and analyze. Motivated by this, we propose a time-variant community mobility model, referred to as the TVC model, which realistically captures spatial and temporal correlations. We devise the communities that lead to skewed location visiting preferences, and time periods that allow us to model time dependent behaviors and periodic re-appearances of nodes at specific locations. To demonstrate the power and flexibility of the TVC model, we use it to generate synthetic traces that match the characteristics of a number of qualitatively different mobility traces, including wireless LAN traces, vehicular mobility traces, and human encounter traces. More importantly, we show that, despite the high level of realism achieved, our TVC model is still theoretically tractable. To establish this, we derive a number of important quantities related to protocol performance, such as the average node degree, the hitting time, and the meeting time, and provide examples of how to utilize this theory to guide design decisions in routing protocols.
More recently, an array of synthetic mobility models has been proposed to improve the realism of the simple random mobility models . More complex rules have been introduced to make the nodes follow a popularity distribution when selecting the next destination @cite_38 , stay on designated paths for movements @cite_50 , or move as a group @cite_22 . These rules enrich the scenarios covered by the synthetic mobility models , but at the same time make theoretical treatment of these models difficult. In addition, most synthetic mobility models are still limited to i.i.d. models, and the mobility decisions are also independent of the current location of nodes and time of simulation.
{ "cite_N": [ "@cite_38", "@cite_22", "@cite_50" ], "mid": [ "", "2053694592", "2167627514" ], "abstract": [ "", "In this paper, we present a survey of various mobility models in both cellular networks and multi-hop networks. We show that group motion occurs frequently in ad hoc networks, and introduce a novel group mobility model Reference Point Group Mobility (RPGM) to represent the relationship among mobile hosts. RPGM can be readily applied to many existing applications. Moreover, by proper choice of parameters, RPGM can be used to model several mobility models which were previously proposed. One of the main themes of this paper is to investigate the impact of the mobility model on the performance of a specific network protocol or application. To this end, we have applied our RPGM model to two different network protocol scenarios, clustering and routing, and have evaluated network performance under different mobility patterns and for different protocol implementations. As expected, the results indicate that different mobility patterns affect the various protocols in different ways. In particular, the ranking of routing algorithms is influenced by the choice of mobility pattern.", "One of the most important methods for evaluating the characteristics of ad hoc networking protocols is through the use of simulation. Simulation provides researchers with a number of significant benefits, including repeatable scenarios, isolation of parameters, and exploration of a variety of metrics. The topology and movement of the nodes in the simulation are key factors in the performance of the network protocol under study. Once the nodes have been initially distributed, the mobility model dictates the movement of the nodes within the network. Because the mobility of the nodes directly impacts the performance of the protocols, simulation results obtained with unrealistic movement models may not correctly reflect the true performance of the protocols. The majority of existing mobility models for ad hoc networks do not provide realistic movement scenarios; they are limited to random walk models without any obstacles. In this paper, we propose to create more realistic movement models through the incorporation of obstacles. These obstacles are utilized to both restrict node movement as well as wireless transmissions. In addition to the inclusion of obstacles, we construct movement paths using the Voronoi diagram of obstacle vertices. Nodes can then be randomly distributed across the paths, and can use shortest path route computations to destinations at randomly chosen obstacles. Simulation results show that the use of obstacles and pathways has a significant impact on the performance of ad hoc network protocols." ] }
0810.3935
2949423092
Realistic mobility models are fundamental to evaluate the performance of protocols in mobile ad hoc networks. Unfortunately, there are no mobility models that capture the non-homogeneous behaviors in both space and time commonly found in reality, while at the same time being easy to use and analyze. Motivated by this, we propose a time-variant community mobility model, referred to as the TVC model, which realistically captures spatial and temporal correlations. We devise the communities that lead to skewed location visiting preferences, and time periods that allow us to model time dependent behaviors and periodic re-appearances of nodes at specific locations. To demonstrate the power and flexibility of the TVC model, we use it to generate synthetic traces that match the characteristics of a number of qualitatively different mobility traces, including wireless LAN traces, vehicular mobility traces, and human encounter traces. More importantly, we show that, despite the high level of realism achieved, our TVC model is still theoretically tractable. To establish this, we derive a number of important quantities related to protocol performance, such as the average node degree, the hitting time, and the meeting time, and provide examples of how to utilize this theory to guide design decisions in routing protocols.
A different approach to mobility modeling is by empirical mobility trace collection . Along this line, researchers have exploited existing wireless network infrastructure, such as wireless LANs (e.g., @cite_19 @cite_28 @cite_24 ) or cellular phone networks (e.g., @cite_4 ), to track user mobility by monitoring their locations. Such traces can be replayed as input mobility patterns for simulations of network protocols @cite_27 . More recently, DTN-specific testbeds @cite_7 @cite_23 @cite_20 aim at collecting encounter events between mobile nodes instead of the mobility patterns. Some initial efforts to mathematically analyze these traces can be found in @cite_7 @cite_25 . Yet, the size of the traces and the environments in which the experiments are performed cannot be adjusted at will by the researchers. To improve the flexibility of traces, the approach of trace-based mobility models has also been proposed @cite_26 @cite_6 @cite_45 . These models discover the underlying mobility rules that lead to the observed properties (such as the duration of stay at locations, the arrival patterns, etc.) in the traces. Statistical analysis is then used to determine proper parameters of the model to match it with the particular trace.
{ "cite_N": [ "@cite_26", "@cite_4", "@cite_7", "@cite_28", "@cite_6", "@cite_24", "@cite_19", "@cite_27", "@cite_45", "@cite_23", "@cite_25", "@cite_20" ], "mid": [ "2127484926", "2143554828", "2112459052", "", "", "", "2160494326", "", "", "", "2145517691", "" ], "abstract": [ "The simulation of mobile networks calls for a mobility model to generate the trajectories of the mobile users (or nodes). It has been shown that the mobility model has a major influence on the behavior of the system. Therefore, using a realistic mobility model is important if we want to increase the confidence that simulations of mobile systems are meaningful in realistic settings. In this paper we present an executable mobility model that uses real-life mobility characteristics to generate mobility scenarios that can be used for network simulations. We present a structured framework for extracting the mobility characteristics from a WLAN trace, for processing the mobility characteristics to determine a parameter set for the mobility model, and for using a parameter set to generate mobility scenarios for simulations. To derive the parameters of the mobility' model, we measure the mobility' characteristics of users of a campus wireless network. Therefore, we call this model the WLAN mobility model Mobility-analysis confirms properties observed by other research groups. The validation shows that the WLAN model maps the real-world mobility' characteristics to the abstract world of network simulators with a very small error. For users that do not have the possibility to capture a WLAN trace, we explore the value space of the WLAN model parameters and show how different parameters sets influence the mobility of the simulated nodes.", "We introduce a system for sensing complex social systems with data collected from 100 mobile phones over the course of 9 months. We demonstrate the ability to use standard Bluetooth-enabled mobile telephones to measure information access and use in different contexts, recognize social patterns in daily user activity, infer relationships, identify socially significant locations, and model organizational rhythms.", "Studying transfer opportunities between wireless devices carried by humans, we observe that the distribution of the inter-contact time, that is the time gap separating two contacts of the same pair of devices, exhibits a heavy tail such as one of a power law, over a large range of value. This observation is confirmed on six distinct experimental data sets. It is at odds with the exponential decay implied by most mobility models. In this paper, we study how this new characteristic of human mobility impacts a class of previously proposed forwarding algorithms. We use a simplified model based on the renewal theory to study how the parameters of the distribution impact the delay performance of these algorithms. We make recommendation for the design of well founded opportunistic forwarding algorithms, in the context of human carried devices.", "", "", "", "Wireless local-area networks are becoming increasingly popular. They are commonplace on university campuses and inside corporations, and they have started to appear in public areas [17]. It is thus becoming increasingly important to understand user mobility patterns and network usage characteristics on wireless networks. 
Such an understanding would guide the design of applications geared toward mobile environments (e.g., pervasive computing applications), would help improve simulation tools by providing a more representative workload and better user mobility models, and could result in a more effective deployment of wireless network components. Several studies have recently been performed on wireless university campus networks and public networks. In this paper, we complement previous research by presenting results from a four week trace collected in a large corporate environment. We study user mobility patterns and introduce new metrics to model user mobility. We also analyze user and load distribution across access points. We compare our results with those from previous studies to extract and explain several network usage and mobility characteristics. We find that average user transfer-rates follow a power law. Load is unevenly distributed across access points and is influenced more by which users are present than by the number of users. We model user mobility with persistence and prevalence. Persistence reflects session durations whereas prevalence reflects the frequency with which users visit various locations. We find that the probability distributions of both measures follow power laws.", "", "", "", "We examine the fundamental properties that determine the basic performance metrics for opportunistic communications. We first consider the distribution of inter-contact times between mobile devices. Using a diverse set of measured mobility traces, we find as an invariant property that there is a characteristic time, order of half a day, beyond which the distribution decays exponentially. Up to this value, the distribution in many cases follows a power law, as shown in recent work. This power-law finding was previously used to support the hypothesis that inter-contact time has a power law tail, and that common mobility models are not adequate. However, we observe that the time scale of interest for opportunistic forwarding may be of the same order as the characteristic time, and thus the exponential tail is important. We further show that already simple models such as random walk and random waypoint can exhibit the same dichotomy in the distribution of inter-contact time as in empirical traces. Finally, we perform an extensive analysis of several properties of human mobility patterns across several dimensions, and we present empirical evidence that the return time of a mobile device to its favorite location site may already explain the observed dichotomy. Our findings suggest that existing results on the performance of forwarding schemes based on power-law tails might be overly pessimistic.", "" ] }
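The inter-contact-time dichotomy discussed in these trace studies (a power-law body with an exponential tail) is straightforward to inspect once encounter logs are available. The sketch below computes per-pair inter-contact times and their empirical CCDF; the assumed log format (pair identifier, contact start, contact end) is mine, not that of any particular testbed.

```python
from collections import defaultdict
import numpy as np

def inter_contact_ccdf(contacts):
    """contacts: iterable of (pair_id, t_start, t_end) encounter records.
    Returns the sorted inter-contact times (gaps between the end of one
    contact and the start of the next contact of the same pair) together
    with their empirical CCDF values."""
    by_pair = defaultdict(list)
    for pair, t0, t1 in contacts:
        by_pair[pair].append((t0, t1))
    gaps = []
    for intervals in by_pair.values():
        intervals.sort()
        for (_, prev_end), (next_start, _) in zip(intervals, intervals[1:]):
            if next_start > prev_end:
                gaps.append(next_start - prev_end)
    gaps = np.sort(np.array(gaps))
    ccdf = 1.0 - np.arange(1, len(gaps) + 1) / len(gaps)
    return gaps, ccdf

# Toy usage with synthetic exponential gaps (a real trace would be loaded here).
rng = np.random.default_rng(0)
starts = np.cumsum(rng.exponential(3600.0, size=200))
log = [("a-b", s, s + 60.0) for s in starts]
gaps, ccdf = inter_contact_ccdf(log)
print(len(gaps), gaps[len(gaps) // 2])  # number of gaps and their median
```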
0810.3935
2949423092
Realistic mobility models are fundamental to evaluate the performance of protocols in mobile ad hoc networks. Unfortunately, there are no mobility models that capture the non-homogeneous behaviors in both space and time commonly found in reality, while at the same time being easy to use and analyze. Motivated by this, we propose a time-variant community mobility model, referred to as the TVC model, which realistically captures spatial and temporal correlations. We devise the communities that lead to skewed location visiting preferences, and time periods that allow us to model time dependent behaviors and periodic re-appearances of nodes at specific locations. To demonstrate the power and flexibility of the TVC model, we use it to generate synthetic traces that match the characteristics of a number of qualitatively different mobility traces, including wireless LAN traces, vehicular mobility traces, and human encounter traces. More importantly, we show that, despite the high level of realism achieved, our TVC model is still theoretically tractable. To establish this, we derive a number of important quantities related to protocol performance, such as the average node degree, the hitting time, and the meeting time, and provide examples of how to utilize this theory to guide design decisions in routing protocols.
As a final note, in @cite_13 , the authors assume the attraction of a community (i.e., a geographical area) to a mobile node is derived from the number of friends of this node currently residing in the community. In our paper we assume that the nodes make movement decisions independently of the others (nonetheless, nodes sharing the same community will exhibit mobility correlation, capturing the social feature indirectly). Mobility models with inter-node dependency require a solid understanding of the social network structure, which is an important area under development. We plan to work further in this direction in the future.
{ "cite_N": [ "@cite_13" ], "mid": [ "1988651441" ], "abstract": [ "Validation of mobile ad hoc network protocols relies almost exclusively on simulation. The value of the validation is, therefore, highly dependent on how realistic the movement models used in the simulations are. Since there is a very limited number of available real traces in the public domain, synthetic models for movement pattern generation must be used. However, most widely used models are currently very simplistic, their focus being ease of implementation rather than soundness of foundation. As a consequence, simulation results of protocols are often based on randomly generated movement patterns and, therefore, may differ considerably from those that can be obtained by deploying the system in real scenarios. Movement is strongly affected by the needs of humans to socialise or cooperate, in one form or another. Fortunately, humans are known to associate in particular ways that can be mathematically modelled and that have been studied in social sciences for years.In this paper we propose a new mobility model founded on social network theory. The model allows collections of hosts to be grouped together in a way that is based on social relationships among the individuals. This grouping is then mapped to a topographical space, with movements influenced by the strength of social ties that may also change in time. We have validated our model with real traces by showing that the synthetic mobility traces are a very good approximation of human movement patterns." ] }
0810.1554
1967173005
The eigenvalue density for members of the Gaussian orthogonal and unitary ensembles follows the Wigner semicircle law. If the Gaussian entries are all shifted by a constant amount s/(2N)^{1/2}, where N is the size of the matrix, in the large N limit a single eigenvalue will separate from the support of the Wigner semicircle provided s>1. In this study, using an asymptotic analysis of the secular equation for the eigenvalue condition, we compare this effect to analogous effects occurring in general variance Wishart matrices and matrices from the shifted mean chiral ensemble. We undertake an analogous comparative study of eigenvalue separation properties when the sizes of the matrices are fixed and s→∞, and higher rank analogs of this setting. This is done using exact expressions for eigenvalue probability densities in terms of generalized hypergeometric functions and using the interpretation of the latter as a Green function in the Dyson Brownian motion model. For the shifted mean Gaussian unitary ensemble an...
In the mathematics literature the problem of the statistical properties of the largest eigenvalue of ) in the case that all entries of @math are equal to @math was first studied by Füredi and Komlós @cite_40 . In the more general setting of real Wigner matrices (independent entries i.i.d. with mean @math and variance @math ) the distribution of the largest eigenvalue was identified as a Gaussian, so generalizing the result of Lang. Only in recent years did the associated phase transition problem, already known to @cite_17 , receive attention in the mathematical literature. In the case of the GUE, this was due to Péché @cite_12 , while a rigorous study of the GOE case can be found in the work of Maida @cite_18 .
{ "cite_N": [ "@cite_40", "@cite_18", "@cite_12", "@cite_17" ], "mid": [ "2088164510", "2031603583", "2090980101", "1984659056" ], "abstract": [ "LetA=(a ij ) be ann ×n matrix whose entries fori≧j are independent random variables anda ji =a ij . Suppose that everya ij is bounded and for everyi>j we haveEa ij =μ,D 2 a ij =σ2 andEa ii =v.", "We establish a large deviation principle for the largest eigenvalue of a rank one deformation of a matrix from the GUE or GOE. As a corollary, we get another proof of the phenomenon, well-known in learning theory and finance, that the largest eigenvalue separates from the bulk when the perturbation is large enough. A large part of the paper is devoted to an auxiliary result on the continuity of spherical integrals in the case when one of the matrix is of rank one, as studied in one of our previous works.", "We compute the limiting eigenvalue statistics at the edge of the spectrum of large Hermitian random matrices perturbed by the addition of small rank deterministic matrices. We consider random Hermitian matrices with independent Gaussian entries Mij,i≤j with various expectations. We prove that the largest eigenvalue of such random matrices exhibits, in the large N limit, various limiting distributions depending on both the eigenvalues of the matrix Open image in new window and its rank. This rank is also allowed to increase with N in some restricted way.", "A recently published Letter by Kota and Potbhare (1977) obtains the averaged spectrum of a large symmetric random matrix each element of which has a finite mean: their results disagree with two recent calculations which predict that under certain circumstances a single isolated eigenvalue splits off from the continuous semicircular distribution of eigenvalues associated with the random part of the matrix. This letter offers a simple re-derivation of this result and corrects the error in the work of Kota and Potbhare." ] }
0810.1554
1967173005
The eigenvalue density for members of the Gaussian orthogonal and unitary ensembles follows the Wigner semicircle law. If the Gaussian entries are all shifted by a constant amount s/(2N)^{1/2}, where N is the size of the matrix, in the large N limit a single eigenvalue will separate from the support of the Wigner semicircle provided s>1. In this study, using an asymptotic analysis of the secular equation for the eigenvalue condition, we compare this effect to analogous effects occurring in general variance Wishart matrices and matrices from the shifted mean chiral ensemble. We undertake an analogous comparative study of eigenvalue separation properties when the sizes of the matrices are fixed and s→∞, and higher rank analogs of this setting. This is done using exact expressions for eigenvalue probability densities in terms of generalized hypergeometric functions and using the interpretation of the latter as a Green function in the Dyson Brownian motion model. For the shifted mean Gaussian unitary ensemble an...
The paper @cite_19 by Ben Arous, Baik and Péché proved the phase transition property relating to ) in the complex case with @math given by @math () corresponds to @math ). Subsequent studies by Baik and Silverstein @cite_31 , Paul @cite_7 and Bai and Yao @cite_15 considered the real case. Significant for the present study is the result of @cite_15 , giving that for ) with @math given as above, the separated eigenvalues have the law of the @math GUE. The case @math ---not considered here---corresponding to self-dual quaternion real matrices, is studied in the recent work of Wang @cite_39 .
{ "cite_N": [ "@cite_7", "@cite_15", "@cite_39", "@cite_19", "@cite_31" ], "mid": [ "", "2042283504", "1551952938", "2066459155", "2106084579" ], "abstract": [ "", "In a spiked population model, the population covariance matrix has all its eigenvalues equal to units except for a few fixed eigenvalues (spikes). This model is proposed by Johnstone to cope with empirical findings on various data sets. The question is to quantify the effect of the perturbation caused by the spike eigenvalues. A recent work by Baik and Silverstein establishes the almost sure limits of the extreme sample eigenvalues associated to the spike eigenvalues when the population and the sample sizes become large. This paper establishes the limiting distributions of these extreme sample eigenvalues. As another important result of the paper, we provide a central limit theorem on random sesquilinear forms.", "The spiked model is an important special case of the Wishart ensemble, and a natural generalization of the white Wishart ensemble. Mathematically, it can be defined on three kinds of variables: the real, the complex and the quaternion. For practical application, we are interested in the limiting distribution of the largest sample eigenvalue. We first give a new proof of the result of Baik, Ben Arous and P ' e ch ' e for the complex spiked model, based on the method of multiple orthogonal polynomials by Bleher and Kuijlaars. Then in the same spirit we present a new result of the rank 1 quaternionic spiked model, proven by combinatorial identities involving quaternionic Zonal polynomials ( = 1 2 Jack polynomials) and skew orthogonal polynomials. We find a phase transition phenomenon for the limiting distribution in the rank 1 quaternionic spiked model as the spiked population eigenvalue increases, and recognize the seemingly new limiting distribution on the critical point as the limiting distribution of the largest sample eigenvalue in the real white Wishart ensemble. Finally we give conjectures for higher rank quaternionic spiked model and the real spiked model.", "AbstractWe compute the limiting distributions of the largest eigenvalue of a complex Gaussian samplecovariance matrix when both the number of samples and the number of variables in each samplebecome large. When all but finitely many, say r, eigenvalues of the covariance matrix arethe same, the dependence of the limiting distribution of the largest eigenvalue of the samplecovariance matrix on those distinguished r eigenvalues of the covariance matrix is completelycharacterized in terms of an infinite sequence of new distribution functions that generalizethe Tracy-Widom distributions of the random matrix theory. Especially a phase transitionphenomena is observed. Our results also apply to a last passage percolation model and aqueuing model. 1 Introduction Consider M independent, identically distributed samples y 1 ,..., y M , all of which are N ×1 columnvectors. We further assume that the sample vectors y k are Gaussian with mean µ and covarianceΣ, where Σ is a fixed N ×N positive matrix; the density of a sample y isp( y) =1(2π)", "We consider a spiked population model, proposed by Johnstone, in which all the population eigenvalues are one except for a few fixed eigenvalues. The question is to determine how the sample eigenvalues depend on the non-unit population ones when both sample size and population size become large. This paper completely determines the almost sure limits of the sample eigenvalues in a spiked model for a general class of samples." ] }
0810.1554
1967173005
The eigenvalue density for members of the Gaussian orthogonal and unitary ensembles follows the Wigner semicircle law. If the Gaussian entries are all shifted by a constant amount s/(2N)^{1/2}, where N is the size of the matrix, in the large N limit a single eigenvalue will separate from the support of the Wigner semicircle provided s>1. In this study, using an asymptotic analysis of the secular equation for the eigenvalue condition, we compare this effect to analogous effects occurring in general variance Wishart matrices and matrices from the shifted mean chiral ensemble. We undertake an analogous comparative study of eigenvalue separation properties when the sizes of the matrices are fixed and s→∞, and higher rank analogs of this setting. This is done using exact expressions for eigenvalue probability densities in terms of generalized hypergeometric functions and using the interpretation of the latter as a Green function in the Dyson Brownian motion model. For the shifted mean Gaussian unitary ensemble an...
The eigenvalue probability density function ) is closely related to the Dyson Brownian motion model @cite_1 in random matrix theory (see Section 3 below). It is also referred to as a Gaussian ensemble with a source. In this context the case of @math having a finite rank has been studied by a number of authors @cite_26 @cite_21 @cite_10 @cite_35 . However our use of this differs in that we will keep @math fixed, and exhibit phase separation as a function of the perturbing parameter.
{ "cite_N": [ "@cite_35", "@cite_26", "@cite_21", "@cite_1", "@cite_10" ], "mid": [ "2952479223", "2097968528", "1498064444", "1988076732", "2949688822" ], "abstract": [ "Consider n non-intersecting particles on the real line (Dyson Brownian motions), all starting from the origin at time=0, and forced to return to x=0 at time=1. For large n, the average mean density of particles has its support, for each 0<t<1, within the interior of an ellipse. The Airy process is defined as the motion of these non-intersecting Brownian motions for large n, but viewed from an arbitrary point on the ellipse with an appropriate space-time rescaling. Assume now a finite number r of these particles are forced to a different target point. Does it affect the Brownian fluctuations along the ellipse for large n? In this paper, we show that no new process appears as long as one considers points on the ellipse, for which the t-coordinate is smaller than the t-coordinate of the point of tangency of the tangent to the curve passing through the target point. At this point of tangency the fluctuations obey a new statistics: the Airy process with r outliers (in short: r-Airy process ). The log of the transition probability of this new process is given by the Fredholm determinant of a new kernel (extending the Airy kernel) and it satisfies a non-linear PDE in x and the time.", "We continue the study of the Hermitian random matrix ensemble with external source Open image in new window where A has two distinct eigenvalues ±a of equal multiplicity. This model exhibits a phase transition for the value a=1, since the eigenvalues of M accumulate on two intervals for a>1, and on one interval for 0 1 was treated in Part I, where it was proved that local eigenvalue correlations have the universal limiting behavior which is known for unitarily invariant random matrices, that is, limiting eigenvalue correlations are expressed in terms of the sine kernel in the bulk of the spectrum, and in terms of the Airy kernel at the edge. In this paper we establish the same results for the case 0<a<1. As in Part I we apply the Deift Zhou steepest descent analysis to a 3×3-matrix Riemann-Hilbert problem. Due to the different structure of an underlying Riemann surface, the analysis includes an additional step involving a global opening of lenses, which is a new phenomenon in the steepest descent analysis of Riemann-Hilbert problems.", "We present a random matrix interpretation of the distribution functions which have appeared in the study of the one-dimensional polynuclear growth (PNG) model with external sources. It is shown that the distribution, GOE @math , which is defined as the square of the GOE Tracy-Widom distribution, can be obtained as the scaled largest eigenvalue distribution of a special case of a random matrix model with a deterministic source, which have been studied in a different context previously. Compared to the original interpretation of the GOE @math as the square of GOE'', ours has an advantage that it can also describe the transition from the GUE Tracy-Widom distribution to the GOE @math . We further demonstrate that our random matrix interpretation can be obtained naturally by noting the similarity of the topology between a certain non-colliding Brownian motion model and the multi-layer PNG model with an external source. 
This provides us with a multi-matrix model interpretation of the multi-point height distributions of the PNG model with an external source.", "A new type of Coulomb gas is defined, consisting of n point charges executing Brownian motions under the influence of their mutual electrostatic repulsions. It is proved that this gas gives an exact mathematical description of the behavior of the eigenvalues of an (n × n) Hermitian matrix, when the elements of the matrix execute independent Brownian motions without mutual interaction. By a suitable choice of initial conditions, the Brownian motion leads to an ensemble of random matrices which is a good statistical model for the Hamiltonian of a complex system possessing approximate conservation laws. The development with time of the Coulomb gas represents the statistical behavior of the eigenvalues of a complex system as the strength of conservation-destroying interactions is gradually increased. A "virial theorem" is proved for the Brownian-motion gas, and various properties of the stationary Coulomb gas are deduced as corollaries.", "We describe the spectral statistics of the first finite number of eigenvalues in a newly-forming band on the hard-edge of the spectrum of a random Hermitean matrix model. It is found that in a suitable scaling regime, they are described by the same spectral statistics of a finite-size Laguerre-type matrix model. The method is rigorously based on the Riemann-Hilbert analysis of the corresponding orthogonal polynomials." ] }
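The Dyson Brownian motion interpretation invoked in this record can be simulated directly at the matrix level: let the entries of a real symmetric matrix perform independent Brownian motions started from a fixed source matrix and track the ordered eigenvalues over time. The entry-wise variance convention, time horizon, and rank-one source strength below are illustrative choices rather than the paper's.

```python
import numpy as np

def dyson_trajectories(N=100, T=1.0, steps=200, source_strength=0.0, seed=0):
    """Eigenvalue trajectories of H(t) = H0 + symmetric Brownian motion,
    where H0 is a rank-one 'source' of the given strength. Returns an
    array of shape (steps + 1, N) with the ordered eigenvalues."""
    rng = np.random.default_rng(seed)
    v = np.ones(N) / np.sqrt(N)
    H = source_strength * np.outer(v, v)          # initial condition (the source)
    dt = T / steps
    traj = [np.linalg.eigvalsh(H)]
    for _ in range(steps):
        G = rng.normal(size=(N, N)) * np.sqrt(dt)
        H = H + (G + G.T) / np.sqrt(2.0)          # symmetric increment, variance ~ dt
        traj.append(np.linalg.eigvalsh(H))
    return np.array(traj)

# Toy usage: with a strong enough source the top eigenvalue stays separated
# from the bulk as the Brownian motion spreads the spectrum.
traj = dyson_trajectories(N=100, source_strength=25.0)
print("largest eigenvalue at t=0 and t=T:", traj[0, -1], traj[-1, -1])
```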
0810.0139
2952484550
Most research related to unithood has been conducted as part of a larger effort for the determination of termhood. Consequently, novelties are rare in this small sub-field of term extraction. In addition, existing work has been mostly empirically motivated and derived. We propose a new probabilistically-derived measure, independent of any influences of termhood, that provides dedicated measures to gather linguistic evidence from parsed text and statistical evidence from the Google search engine for the measurement of unithood. Our comparative study using 1,825 test cases against an existing empirically-derived function revealed an improvement in terms of precision, recall and accuracy.
@cite_3 proposed a measure known as for extracting complex terms. The measure is based upon the claim that a substring of a term candidate is a candidate itself given that it demonstrates adequate independence from the longer version it appears in. For example, , and are acceptable as valid complex term candidates. However, is not. Therefore, some measures are required to gauge the strength of word combinations to decide whether two word sequences should be merged or not. Given a word sequence @math to be examined for unithood, the is defined as: where @math is the number of words in @math , @math is the set of longer term candidates that contain @math , @math is the longest n-gram considered, @math is the frequency of occurrence of @math , and @math . While certain researchers @cite_0 consider as a termhood measure, others @cite_5 accept it as a measure for unithood. One can observe that longer candidates tend to gain higher weights due to the inclusion of @math in Equation . In addition, the weights computed using Equation are purely dependent on the frequency of @math .
{ "cite_N": [ "@cite_0", "@cite_5", "@cite_3" ], "mid": [ "", "2046045616", "2163085258" ], "abstract": [ "", "In this paper, we propose a new idea for the automatic recognition of domain specific terms. Our idea is based on the statistics between a compound noun and its component single-nouns. More precisely, we focus basically on how many nouns adjoin the noun in question to form compound nouns. We propose several scoring methods based on this idea and experimentally evaluate them on the NTCIRI TMREC test collection. The results are very promising especially in the low recall area.", "The information used for the extraction of terms can be considered as rather 'internal', i.e. coming from the candidate string itself. This paper presents the incorporation of 'external' information derived from the context of the candidate string. It is embedded to the C-value approach for automatic term recognition (ATR), in the form of weights constructed from statistical characteristics of the context words of the candidate string." ] }
0809.5008
2102473388
The benefit of multi-antenna receivers is investigated in wireless ad hoc networks, and the main finding is that network throughput can be made to scale linearly with the number of receive antennas N_r even if each transmitting node uses only a single antenna. This is in contrast to a large body of prior work in single-user, multiuser, and ad hoc wireless networks that have shown linear scaling is achievable when multiple receive and transmit antennas (i.e., MIMO transmission) are employed, but that throughput increases logarithmically or sublinearly with N_r when only a single transmit antenna (i.e., SIMO transmission) is used. The linear gain is achieved by using the receive degrees of freedom to simultaneously suppress interference and increase the power of the desired signal, and exploiting the subsequent performance benefit to increase the density of simultaneous transmissions instead of the transmission rate. This result is proven in the transmission capacity framework, which presumes single-hop transmissions in the presence of randomly located interferers, but it is also illustrated that the result holds under several relaxations of the model, including imperfect channel knowledge, multihop transmission, and regular networks (i.e., interferers are deterministically located on a grid).
Early work on characterizing the throughput gains from MIMO in ad hoc networks includes @cite_11 @cite_15 @cite_19 @cite_12 , although these studies relied primarily on simulations, while more recently @cite_9 @cite_7 @cite_18 used tools similar to those used in this paper and developed by the present authors. However, none of these works has characterized the maximum throughput gains achievable with receiver processing only.
{ "cite_N": [ "@cite_18", "@cite_7", "@cite_9", "@cite_19", "@cite_15", "@cite_12", "@cite_11" ], "mid": [ "2153850096", "2081156871", "2051577697", "1509281150", "1989460852", "2123864656", "2096469168" ], "abstract": [ "This paper investigates the performance of spatial diversity techniques in dense ad hoc networks. We derive analytical expressions for the contention density in systems employing MIMO-MRC or OSTBC. In the case of MIMO- MRC the expressions are based on new expansions for the SIR distribution in the high interference regime typical in dense networks. Our results are confirmed through comparison with Monte Carlo simulations.", "In ad hoc networks of nodes equipped with multiple antennas, the tradeoff between spatial multiplexing and diversity gains in each link impacts the overall network capacity. An optimal algorithm is developed for adaptive rate and power control for a communication link over multiple channels in a Poisson field of interferers. The algorithm and its analysis demonstrate that optimum area spectral efficiency is achieved when each communication link in a large distributed wireless network properly balances between diversity and multiplexing techniques. The channel adaptive algorithm is shown to be superior to traditional and static multi-antenna architectures, as well as to certain channel adaptive strategies previously proposed. Lastly, the adaptive rate control algorithm is coupled with an optimum frequency hopping scheme to achieve the maximum area spectral efficiency.", "We study in this paper the network spectral efficiency of a multiple-input multiple-output (MIMO) ad hoc network with K simultaneous communicating transmitter-receiver pairs. Assuming that each transmitter is equipped with t antennas and each receiver with r antennas and each receiver implements single-user detection, we show that in the absence of channel state information (CSI) at the transmitters, the asymptotic network spectral efficiency is limited by r nats s Hz as Krarrinfin and is independent of t and the transmit power. With CSI corresponding to the intended receiver available at the transmitter, we demonstrate that the asymptotic spectral efficiency is at least t+r+2radictr nats s Hz. Asymptotically optimum signaling is also derived under the same CSI assumption, i.e., each transmitter knows the channel corresponding to its desired receiver only. Further capacity improvement is possible with stronger CSI assumption; we demonstrate this using a heuristic interference suppression transmit beamforming approach. The conventional orthogonal transmission approach is also analyzed. In particular, we show that with idealized medium access control, the channelized transmission has unbounded asymptotic spectral efficiency under the constant per-user power constraint. The impact of different power constraints on the asymptotic spectral efficiency is also carefully examined. Finally, numerical examples are given that confirm our analysis", "We study the throughput limits of a MIMO (multiple-input multiple output) ad hoc network with K simultaneous communicating transceiver pairs. Assume that each transmitter is equipped with t antennas and the receivers with r antennas, we show that in the absence of channel state information (CSI) at the transmitters, the asymptotic network throughput is limited by r nats s Hz as K spl rarr spl infin . 
With CSI corresponding to the desired receiver available at the transmitter, we demonstrate that an asymptotic throughput of t+r+2 spl radic tr nats s Hz can be achieved using a simple beamforming approach. Further, we show that the asymptotically optimal transmission scheme with CSI amounts to a single-user waterfilling for a properly scaled channel.", "Beamforming antennas have the potential to provide a fundamental breakthrough in ad hoc network capacity. We present a broad-based examination of this potential, focusing on exploiting the longer ranges as well as the reduced interference that beamforming antennas can provide. We consider a number of enhancements to a convectional ad hoc network system, and evaluation the impact of each enhancement using simulation. Such enhancements include \"aggressive\" and \"conservative\" channel access models for beamforming antennas, link power control, and directional neighbor discovery. Our simulations are based on detailed modeling on detailed modeling of steered as well as swiched beams using antenna patterns of varying gains, and a realistic radio and propagation model. For the scenarios studied, our results show that beamforming can yield a 28 to 118 (depending upon the density) improvement in throughput, and up to a factor-of-28 reduction in delay. Our study also tells us which mechanisms are likely to be more effective and under what conditions, which in turn identifies areas where future research is neede", "In this paper, the rate regions are studied for MIMO ad hoc networks. We first apply a framework developed for single antenna systems to MIMO systems, which gives the ultimate capacity region. Motivated by the fact that the ultimate capacity regions allow an optimization that may he unrealistic in some networks, a new concept of average rate region is proposed. We show the large gap between the ultimate capacity region and the new average rate region, while the latter is an upper bound on the performance of many existing ad hoc routing protocols. On the other hand, the average rate region also gives the average system performance over fading or random node positions. The outage capacity region is also defined. Through the study of the different rate regions, we show that the gain from multiple antennas for networks is similar to that for point-to-point communications. The gains obtained from multi-hop routing and spatial reuse are also shown for MIMO networks.", "Directional antennas offer tremendous potential for improving the performance of ad hoc networks. Harnessing this potential, however, requires new mechanisms at the medium access and network layers for intelligently and adaptively exploiting the antenna system. While recent years have seen a surge of research into such mechanisms, the problem of developing a complete ad hoc networking system, including the unique challenge of real-life prototype development and experimentation has not been addressed. In this paper, we present utilizing directional antennas for ad hoc networking (UDAAN). UDAAN is an interacting suite of modular network- and medium access control (MAC)-layer mechanisms for adaptive control of steered or switched antenna systems in an ad hoc network. UDAAN consists of several new mechanisms-a directional power-controlled MAC, neighbor discovery with beamforming, link characterization for directional antennas, proactive routing and forwarding-all working cohesively to provide the first complete systems solution. 
We also describe the development of a real-life ad hoc network testbed using UDAAN with switched directional antennas, and we discuss the lessons learned during field trials. High fidelity simulation results, using the same networking code as in the prototype, are also presented both for a specific scenario and using random mobility models. For the range of parameters studied, our results show that UDAAN can produce a very significant improvement in throughput over omnidirectional communications." ] }
0809.1552
2078543558
We provide a computer-verified exact monadic functional implementation of the Riemann integral in type theory. Together with previous work by O'Connor, this may be seen as the beginning of the realization of Bishop's vision to use constructive mathematics as a programming language for exact analysis.
There is some potential to increase efficiency by using a "non-uniform" definition of uniform continuity, that is, a definition of uniform continuity that allows different segments of the domain to have local moduli associated with them. Ulrich Berger uses such a definition of uniform continuity to define integration @cite_12 . Simpson also defines an integration algorithm that uses a local modulus for a function, computed directly from the definition of the function @cite_10 . However, implementing his algorithm directly in Coq is not possible because it relies on bar induction, which is not available in Coq unless one adds it as an axiom or treats the real numbers as a formal space @cite_36 @cite_16 .
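The role a (uniform) modulus of continuity plays in integration can be illustrated with a small sketch. The following is a minimal Python approximation (floating point rather than exact reals, and not the Coq development being discussed): the modulus tells the integrator how fine a partition must be to reach a requested accuracy. The function names and the sample modulus are illustrative assumptions.

```python
import math

def integrate(f, a, b, eps, modulus):
    """Approximate the integral of f over [a, b] (a < b) to within eps,
    assuming f is uniformly continuous with the given modulus:
    |x - y| <= modulus(e)  implies  |f(x) - f(y)| <= e.
    """
    # If each sample is accurate to eps / (b - a), the Riemann sum
    # is accurate to eps over the whole interval.
    delta = modulus(eps / (b - a))
    n = max(1, math.ceil((b - a) / delta))
    width = (b - a) / n
    # Midpoint Riemann sum over n equal segments.
    return sum(f(a + (i + 0.5) * width) for i in range(n)) * width

# Example: integrate sin on [0, 1]; since |sin'| <= 1, e itself is a modulus.
approx = integrate(math.sin, 0.0, 1.0, 1e-4, lambda e: e)
print(approx, 1 - math.cos(1.0))  # the two values agree to about 1e-4
```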
{ "cite_N": [ "@cite_36", "@cite_16", "@cite_10", "@cite_12" ], "mid": [ "1486536800", "2111094344", "1515039666", "2108845204" ], "abstract": [ "The notion of formal space was introduced by Fourman and Grayson [FG] only a few years ago, but it is only a recent though important step of a long story whose roots involve such names as Brouwer and Stone and whose development is due to mathematicians from different fields, mainly algebraic geometry, category theory and logic.", "Cauchy’s construction of reals as sequences of rational approximations is the theoretical basis for a number of implementations of exact real numbers, while Dedekind’s construction of reals as cuts has inspired fewer useful computational ideas. Nevertheless, we can see the computational content of Dedekind reals by constructing them within Abstract Stone Duality (ASD), a computationally meaningful calculus for topology. This provides the theoretical background for a novel way of computing with real numbers in the style of logic programming. Real numbers are dened in terms of (lower and upper) Dedekind cuts, while programs are expressed as statements about real numbers in the language of ASD. By adapting Newton’s method to interval arithmetic we can make the computations as ecient as those based on Cauchy reals. The results reported in this talk are joint work with Paul Taylor.", "We show how functional languages can be used to write programs for real-valued functionals in exact real arithmetic. We concentrate on two useful functionals: definite integration, and the functional returning the maximum value of a continuous function over a closed interval. The algorithms are a practical application of a method, due to Berger, for computing quantifiers over streams. Correctness proofs for the algorithms make essential use of domain theory.", "We give a coinductive characterization of the set of continuous functions defined on a compact real interval, and extract certified programs that construct and combine exact real number algorithms with respect to the binary signed digit representation of real numbers. The data type corresponding to the coinductive definition of continuous functions consists of finitely branching non-wellfounded trees describing when the algorithm writes and reads digits. This is a pilot study in using proof-theoretic methods for certified algorithms in exact real arithmetic." ] }
0809.1552
2078543558
We provide a computer-verified exact monadic functional implementation of the Riemann integral in type theory. Together with previous work by O'Connor, this may be seen as the beginning of the realization of Bishop's vision to use constructive mathematics as a programming language for exact analysis.
The constructive real numbers have already been used to provide a semi-decision procedure for inequalities of real numbers, not only for the constructive real numbers themselves but also for the non-computational real numbers in the Coq standard library @cite_37 . The same technique can be applied here.
{ "cite_N": [ "@cite_37" ], "mid": [ "1646312512" ], "abstract": [ "There are two incompatible Coq libraries that have a theory of the real numbers; the Coq standard library gives an axiomatic treatment of classical real numbers, while the CoRN library from Nijmegen defines constructively valid real numbers. Unfortunately, this means results about one structure cannot easily be used in the other structure. We present a way interfacing these two libraries by showing that their real number structures are isomorphic assuming the classical axioms already present in the standard library reals. This allows us to use O'Connor's decision procedure for solving ground inequalities present in CoRN to solve inequalities about the reals from the Coq standard library, and it allows theorems from the Coq standard library to apply to problem about the CoRN reals." ] }
0809.1552
2078543558
We provide a computer-verified exact monadic functional implementation of the Riemann integral in type theory. Together with previous work by O'Connor, this may be seen as the beginning of the realization of Bishop's vision to use constructive mathematics as a programming language for exact analysis.
Previously, the CoRN project @cite_21 showed that the formalization of constructive analysis in a type theory is feasible. However, the extraction of programs from such developments is difficult @cite_19 . In contrast, in the present article we have shown that if one takes an algorithmic attitude from the start, it is possible to obtain feasible programs.
{ "cite_N": [ "@cite_19", "@cite_21" ], "mid": [ "2096282101", "1498091397" ], "abstract": [ "It is well known that mathematical proofs often contain (abstract) algorithms, but although these algorithms can be understood by a human, it still takes a lot of time and effort to implement these algorithms on a computer; moreover, one runs the risk of making mistakes in the process.", "We present C-CoRN, the Constructive Coq Repository at Nijmegen. It consists of a mathematical library of constructive algebra and analysis formalized in the theorem prover Coq. We explain the structure and the contents of the library and we discuss the motivation and some (possible) applications of such a library." ] }
0809.1802
1939790513
Most search engines index the textual content of documents in digital libraries. However, scholarly articles frequently report important findings in figures for visual impact and the contents of these figures are not indexed. These contents are often invaluable to the researcher in various fields, for the purposes of direct comparison with their own work. Therefore, searching for figures and extracting figure data are important problems. To the best of our knowledge, there exists no tool to automatically extract data from figures in digital documents. If we can extract data from these images automatically and store them in a database, an end-user can query and combine data from multiple digital documents simultaneously and efficiently. We propose a framework based on image analysis and machine learning to extract information from 2-D plot images and store them in a database. The proposed algorithm identifies a 2-D plot and extracts the axis labels, legend and the data points from the 2-D plot. We also segregate overlapping shapes that correspond to different data points. We demonstrate performance of individual algorithms, using a combination of generated and real-life images.
The image categorization portion of our work bears a similarity to image understanding; however, we focus on deciding whether a given image contains a 2-D plot. Li et al. @cite_2 developed wavelet-transform-based, context-sensitive algorithms that perform texture-based analysis of an image to separate camera-taken pictures from non-pictures. Building on this framework, Lu et al. @cite_7 developed an automatic image categorization system for digital library documents which categorizes images into multiple classes within the non-picture class, e.g., diagrams, 2-D figures, 3-D figures, and others. We find significant improvements in detecting 2-D figures by substituting certain features used in @cite_7 . @cite_1 presents image-processing-based techniques to extract the data represented by lines in 2-D plots. However, @cite_1 does not extract the data represented by data points and instead treats the data-point shapes as noise while processing the image. Our work is complementary in that we address the question of how to extract data represented by various shapes.
{ "cite_N": [ "@cite_1", "@cite_7", "@cite_2" ], "mid": [ "2124894569", "2159428917", "2113475061" ], "abstract": [ "Two-dimensional (2-D) plots in digital documents contain important information. Often, the results of scientific experiments and performance of businesses are summarized using plots. Although 2-D plots are easily understood by human users, current search engines rarely utilize the information contained in the plots to enhance the results returned in response to queries posed by end- users. We propose an automated algorithm for extracting information from line curves in 2-D plots. The extracted information can be stored in a database and indexed to answer end-user queries and enhance search results. We have collected 2-D plot images from a variety of resources and tested our extraction algorithms. Experimental evaluation has demonstrated that our method can produce results suitable for real world use.", "Figures are very important non-textual information contained in scientific documents. Current digital libraries do not provide users tools to retrieve documents based on the information available within the figures. We propose an architecture for retrieving documents by integrating figures and other information. The initial step in enabling integrated document search is to categorize figures into a set of pre-defined types. We propose several categories of figures based on their functionalities in scholarly articles. We have developed a machine-learning-based approach for automatic categorization of figures. Both global features, such as texture, and part features, such as lines, are utilized in the architecture for discriminating among figure categories. The proposed approach has been evaluated on a testbed document set collected from the CiteSeer scientific literature digital library. Experimental evaluation has demonstrated that our algorithms can produce acceptable results for realworld use. Our tools will be integrated into a scientific document digital library.", "In this paper, an algorithm is developed for segmenting document images into four classes: background, photograph, text, and graph. Features used for classification are based on the distribution patterns of wavelet coefficients in high frequency bands. Two important attributes of the algorithm are its multiscale nature-it classifies an image at different resolutions adaptively, enabling accurate classification at class boundaries as well as fast classification overall-and its use of accumulated context information for improving classification accuracy." ] }
0809.2059
2155584001
We prove the existence and uniqueness, for wave speeds sufficiently large, of monotone traveling wave solutions connecting stable to unstable spatial equilibria for a class of @math -dimensional lattice differential equations with unidirectional coupling. This class of lattice equations includes some spatial discretizations for hyperbolic conservation laws with a source term as well as a subclass of monotone systems. We obtain a variational characterization of the critical wave speed above which monotone traveling wave solutions are guaranteed to exist. We also discuss non-monotone waves, and the coexistence of monotone and non-monotone waves.
In @cite_26 , Weinberger considers difference equations of the form \[ u(t+1,\cdot) = Q(u(t,\cdot)). \] Here the map @math acts on functions @math which themselves map a spatial domain to the positive reals. The spatial domain is @math -dimensional and either continuous or discrete. If condition holds, the time- @math maps for fit into the framework of @cite_26 ; in this case, the existence of decreasing fronts for when @math is a consequence of results in @cite_26 . The main focus in @cite_26 is on the so-called asymptotic spreading speed of initial data supported on a compact set; the uniqueness of monotone fronts, and the existence and behavior of non-monotone fronts, are not directly addressed.
{ "cite_N": [ "@cite_26" ], "mid": [ "1969504525" ], "abstract": [ "It is shown that many of the asymptotic properties of the Fisher model for population genetics and population ecology can also be derived for a class of models in which time is discrete and space may or may not be discrete. This allows one to discuss the behavior of models in which the data consist of occasional counts on survey tracts, as well as that of computer models." ] }
0809.2059
2155584001
We prove the existence and uniqueness, for wave speeds sufficiently large, of monotone traveling wave solutions connecting stable to unstable spatial equilibria for a class of @math -dimensional lattice differential equations with unidirectional coupling. This class of lattice equations includes some spatial discretizations for hyperbolic conservation laws with a source term as well as a subclass of monotone systems. We obtain a variational characterization of the critical wave speed above which monotone traveling wave solutions are guaranteed to exist. We also discuss non-monotone waves, and the coexistence of monotone and non-monotone waves.
The existence results in @cite_26 use a monotone iteration technique. The same is true of @cite_19 (described further below), where a monotonicity condition on @math is also imposed. Recently such techniques have been extended, in the setting of lattice integro-difference equations, to the case where the nonlinearity is not necessarily monotone but satisfies conditions which are similar to our (G1). In particular @cite_17 and @cite_8 both obtain the existence of (not necessarily monotone) traveling waves as well as a variational characterization of the minimum wave speed guaranteeing monotone fronts. Although our setting and techniques differ, our results here can be regarded as complementing these latter works.
{ "cite_N": [ "@cite_19", "@cite_26", "@cite_8", "@cite_17" ], "mid": [ "2071564573", "1969504525", "", "2086141181" ], "abstract": [ "Abstract This work proves the existence and multiplicity results of monotonic traveling wave solutions for some lattice differential equations by using the monotone iteration method. Our results include the model of cellular neural networks (CNN). In addition to the monotonic traveling wave solutions, non-monotonic and oscillating traveling wave solutions in the delay type of CNN are also obtained.", "It is shown that many of the asymptotic properties of the Fisher model for population genetics and population ecology can also be derived for a class of models in which time is discrete and space may or may not be discrete. This allows one to discuss the behavior of models in which the data consist of occasional counts on survey tracts, as well as that of computer models.", "", "A class of integral recursion models for the growth and spread of a synchronized single-species population is studied. It is well known that if there is no overcompensation in the fecundity function, the recursion has an asymptotic spreading speed c*, and that this speed can be characterized as the speed of the slowest non-constant traveling wave solution. A class of integral recursions with overcompensation which still have asymptotic spreading speeds can be found by using the ideas introduced by Thieme (J Reine Angew Math 306:94–121, 1979) for the study of space-time integral equation models for epidemics. The present work gives a large subclass of these models with overcompensation for which the spreading speed can still be characterized as the slowest speed of a non-constant traveling wave. To illustrate our results, we numerically simulate a series of traveling waves. The simulations indicate that, depending on the properties of the fecundity function, the tails of the waves may approach the carrying capacity monotonically, may approach the carrying capacity in an oscillatory manner, or may oscillate continually about the carrying capacity, with its values bounded above and below by computable positive numbers." ] }
0809.2059
2155584001
We prove the existence and uniqueness, for wave speeds sufficiently large, of monotone traveling wave solutions connecting stable to unstable spatial equilibria for a class of @math -dimensional lattice differential equations with unidirectional coupling. This class of lattice equations includes some spatial discretizations for hyperbolic conservation laws with a source term as well as a subclass of monotone systems. We obtain a variational characterization of the critical wave speed above which monotone traveling wave solutions are guaranteed to exist. We also discuss non-monotone waves, and the coexistence of monotone and non-monotone waves.
In @cite_19 Hsu and Lin consider lattice equations of the form (two-dimensional equations, and equations where the coupling is not unidirectional, are also considered in @cite_19 ). The chief motivation for @cite_19 lies in the case that @math is piecewise linear; in this case becomes a so-called cellular neural network (CNN). Cellular neural networks were first introduced in @cite_31 to model the behavior of a large array of coupled electronic components.
{ "cite_N": [ "@cite_19", "@cite_31" ], "mid": [ "2071564573", "2056717940" ], "abstract": [ "Abstract This work proves the existence and multiplicity results of monotonic traveling wave solutions for some lattice differential equations by using the monotone iteration method. Our results include the model of cellular neural networks (CNN). In addition to the monotonic traveling wave solutions, non-monotonic and oscillating traveling wave solutions in the delay type of CNN are also obtained.", "The theory of a novel class of information-processing systems, called cellular neural networks, which are capable of high-speed parallel signal processing, was presented in a previous paper (see ibid., vol.35, no.10, p.1257-72, 1988). A dynamic route approach for analyzing the local dynamics of this class of neural circuits is used to steer the system trajectories into various stable equilibrium configurations which map onto binary patterns to be recognized. Some applications of cellular neural networks to such areas as image processing and pattern recognition are demonstrated, albeit with only a crude circuit. In particular, examples of cellular neural networks which can be designed to recognize the key features of Chinese characters are presented. >" ] }
0809.2059
2155584001
We prove the existence and uniqueness, for wave speeds sufficiently large, of monotone traveling wave solutions connecting stable to unstable spatial equilibria for a class of @math -dimensional lattice differential equations with unidirectional coupling. This class of lattice equations includes some spatial discretizations for hyperbolic conservation laws with a source term as well as a subclass of monotone systems. We obtain a variational characterization of the critical wave speed above which monotone traveling wave solutions are guaranteed to exist. We also discuss non-monotone waves, and the coexistence of monotone and non-monotone waves.
The existence problem in @cite_19 is formulated in terms of increasing traveling waves of negative speed; a change of variables is required to convert to our setting. In terms of , the main hypotheses in @cite_19 are: @math ; @math ; and @math for @math and @math . The first and third conditions above are analogous to our (G1.2) and (G1.3). The second condition above is stronger; it should be thought of as analogous to and allows for the application of monotone iteration techniques. Under these conditions, there is some @math such that a decreasing front exists for all @math (Theorem 1.1 in @cite_19 , reformulated for our setting). In @cite_19 and the companion paper @cite_28 the authors also consider with a piecewise linear @math for which the monotonicity condition above fails, but which is simple enough to admit detailed analysis. In results analogous to ours, the authors describe conditions under which, as @math drops below a critical level, monotone fronts give way to non-monotone fronts that "overshoot" and oscillate about their limit at @math .
{ "cite_N": [ "@cite_28", "@cite_19" ], "mid": [ "2046114643", "2071564573" ], "abstract": [ "In this paper, we study the structure of traveling wave solutions of Cellular Neural Networks of the advanced type. We show the existence of monotone traveling wave, oscillating wave and eventually periodic wave solutions by using shooting method and comparison principle. In addition, we obtain the existence of periodic wave train solutions.", "Abstract This work proves the existence and multiplicity results of monotonic traveling wave solutions for some lattice differential equations by using the monotone iteration method. Our results include the model of cellular neural networks (CNN). In addition to the monotonic traveling wave solutions, non-monotonic and oscillating traveling wave solutions in the delay type of CNN are also obtained." ] }
0809.2554
1529111168
We study local search algorithms for metric instances of facility location problems: the uncapacitated facility location problem (UFL), as well as uncapacitated versions of the @math -median, @math -center and @math -means problems. All these problems admit natural local search heuristics: for example, in the UFL problem the natural moves are to open a new facility, close an existing facility, and to swap a closed facility for an open one; in @math -medians, we are allowed only swap moves. The local-search algorithm for @math -median was analyzed by (SIAM J. Comput. 33(3):544-562, 2004), who used a clever coupling'' argument to show that local optima had cost at most constant times the global optimum. They also used this argument to show that the local search algorithm for UFL was 3-approximation; their techniques have since been applied to other facility location problems. In this paper, we give a proof of the @math -median result which avoids this coupling argument. These arguments can be used in other settings where the arguments have been used. We also show that for the problem of opening @math facilities @math to minimize the objective function @math , the natural swap-based local-search algorithm is a @math -approximation. This implies constant-factor approximations for @math -medians (when @math ), and @math -means (when @math ), and an @math -approximation algorithm for the @math -center problem (which is essentially @math ).
Facility location problems have had a long history; here, we mention only some of the results for these problems. We focus on metric instances of these problems: the non-metric cases are usually much harder @cite_15 .
{ "cite_N": [ "@cite_15" ], "mid": [ "2046554800" ], "abstract": [ "We describe in this paper polynomial heuristics for three important hard problems--the discrete fixed cost median problem (the plant location problem), the continuous fixed cost median problem in a Euclidean space, and the network fixed cost median problem with convex costs. The heuristics for all the three problems guarantee error ratios no worse than the logarithm of the number of customer points. The derivation of the heuristics is based on the presentation of all types of median problems discussed as a set covering problem." ] }
0809.2554
1529111168
We study local search algorithms for metric instances of facility location problems: the uncapacitated facility location problem (UFL), as well as uncapacitated versions of the @math -median, @math -center and @math -means problems. All these problems admit natural local search heuristics: for example, in the UFL problem the natural moves are to open a new facility, close an existing facility, and to swap a closed facility for an open one; in @math -medians, we are allowed only swap moves. The local-search algorithm for @math -median was analyzed by (SIAM J. Comput. 33(3):544-562, 2004), who used a clever coupling'' argument to show that local optima had cost at most constant times the global optimum. They also used this argument to show that the local search algorithm for UFL was 3-approximation; their techniques have since been applied to other facility location problems. In this paper, we give a proof of the @math -median result which avoids this coupling argument. These arguments can be used in other settings where the arguments have been used. We also show that for the problem of opening @math facilities @math to minimize the objective function @math , the natural swap-based local-search algorithm is a @math -approximation. This implies constant-factor approximations for @math -medians (when @math ), and @math -means (when @math ), and an @math -approximation algorithm for the @math -center problem (which is essentially @math ).
The @math -median problem seeks to find facilities @math with @math to minimize @math . The first constant-factor approximation for the @math -median problem was given by @cite_4 , and was subsequently improved by @cite_20 and @cite_2 to the current best factor of @math . It is known that the natural LP relaxation for the problem has an integrality gap of @math , but the currently known algorithm that achieves this does not run in polynomial time @cite_26 . The extension of @math -median to the case where one can open at most @math facilities but also has to pay their facility opening costs was studied by @cite_12 , who gave a @math -approximation.
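For reference, the usual textbook statement of the metric k-median objective, written here with generic notation (a client set C, a candidate facility set V, and a metric d) rather than quoted from any of the sources above, is:

```latex
\min_{F \subseteq V,\ |F| \le k} \ \sum_{j \in C} \min_{i \in F} d(i, j)
```

The k-means objective discussed next has the same form with d(i, j) replaced by d(i, j)^2.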
{ "cite_N": [ "@cite_26", "@cite_4", "@cite_2", "@cite_20", "@cite_12" ], "mid": [ "1516429886", "2085751730", "2003207175", "2033647570", "2142133329" ], "abstract": [ "This work gives new insight into two well-known approximation algorithms for the uncapacitated facility location problem: the primal-dual algorithm of Jain & Vazirani, and an algorithm of Mettu & Plaxton. Our main result answers positively a question posed by Jain & Vazirani of whether their algorithm can be modified to attain a desired “continuity” property. This yields an upper bound of 3 on the integrality gap of the natural LP relaxation of the k-median problem, but our approach does not yield a polynomial time algorithm with this guarantee. We also give a new simple proof of the performance guarantee of the Mettu-Plaxton algorithm using LP duality, which suggests a minor modification of the algorithm that makes it Lagrangian-multiplier preserving.", "We present the first constant-factor approximation algorithm for the metric k-median problem. The k-median problem is one of the most well-studied clustering problems, i.e., those problems in which the aim is to partition a given set of points into clusters so that the points within a cluster are relatively close with respect to some measure. For the metric k-median problem, we are given n points in a metric space. We select k of these to be cluster centers and then assign each point to its closest selected center. If point j is assigned to a center i, the cost incurred is proportional to the distance between i and j. The goal is to select the k centers that minimize the sum of the assignment costs. We give a 62 3-approximation algorithm for this problem. This improves upon the best previously known result of O(log k log log k), which was obtained by refining and derandomizing a randomized O(log n log log n)-approximation algorithm of Bartal.", "We analyze local search heuristics for the metric k-median and facility location problems. We define the locality gap of a local search procedure for a minimization problem as the maximum ratio of a locally optimum solution (obtained using this procedure) to the global optimum. For k-median, we show that local search with swaps has a locality gap of 5. Furthermore, if we permit up to p facilities to be swapped simultaneously, then the locality gap is 3+2 p. This is the first analysis of a local search for k-median that provides a bounded performance guarantee with only k medians. This also improves the previous known 4 approximation for this problem. For uncapacitated facility location, we show that local search, which permits adding, dropping, and swapping a facility, has a locality gap of 3. This improves the bound of 5 given by M. Korupolu, C. Plaxton, and R. Rajaraman [Analysis of a Local Search Heuristic for Facility Location Problems, Technical Report 98-30, DIMACS, 1998]. We also consider a capacitated facility location problem where each facility has a capacity and we are allowed to open multiple copies of a facility. For this problem we introduce a new local search operation which opens one or more copies of a facility and drops zero or more facilities. We prove that this local search has a locality gap between 3 and 4.", "We present improved combinatorial approximation algorithms for the uncapacitated facility location problem. Two central ideas in most of our results are cost scaling and greedy improvement. We present a simple greedy local search algorithm which achieves an approximation ratio of @math in @math time. 
This also yields a bicriteria approximation tradeoff of @math for facility cost versus service cost which is better than previously known tradeoffs and close to the best possible. Combining greedy improvement and cost scaling with a recent primal-dual algorithm for facility location due to Jain and Vazirani, we get an approximation ratio of @math in @math time. This is very close to the approximation guarantee of the best known algorithm which is linear programming (LP)-based. Further, combined with the best known LP-based algorithm for facility location, we get a very slight improvement in the approximation factor for facility location, achieving @math . We also consider a variant of the capacitated facility location problem and present improved approximation algorithms for this.", "In this paper, we define a network service provider game. We show that the price of anarchy of the defined game can be bounded by analyzing a local search heuristic for a related facility location problem called the k-facility location problem. As a result, we show that the k-facility location problem has a locality gap of 5. This result is of interest on its own. Our result gives evidence to the belief that the price of anarchy of certain games are related to analysis of local search heuristics." ] }
0809.2554
1529111168
We study local search algorithms for metric instances of facility location problems: the uncapacitated facility location problem (UFL), as well as uncapacitated versions of the @math -median, @math -center and @math -means problems. All these problems admit natural local search heuristics: for example, in the UFL problem the natural moves are to open a new facility, close an existing facility, and to swap a closed facility for an open one; in @math -medians, we are allowed only swap moves. The local-search algorithm for @math -median was analyzed by (SIAM J. Comput. 33(3):544-562, 2004), who used a clever coupling'' argument to show that local optima had cost at most constant times the global optimum. They also used this argument to show that the local search algorithm for UFL was 3-approximation; their techniques have since been applied to other facility location problems. In this paper, we give a proof of the @math -median result which avoids this coupling argument. These arguments can be used in other settings where the arguments have been used. We also show that for the problem of opening @math facilities @math to minimize the objective function @math , the natural swap-based local-search algorithm is a @math -approximation. This implies constant-factor approximations for @math -medians (when @math ), and @math -means (when @math ), and an @math -approximation algorithm for the @math -center problem (which is essentially @math ).
The @math -means problem minimizes @math , and is widely used for clustering in machine learning, especially when the point set lies in Euclidean space. For Euclidean instances, one can obtain @math -approximations in linear time if @math and @math are treated as constants: see @cite_7 and the references therein. The most commonly used algorithm in practice is Lloyd's algorithm, which is a local-search procedure different from ours, and which is a special case of the EM algorithm @cite_27 . While there is no explicit mention of an approximation algorithm with provable guarantees for @math -means (to the best of our knowledge), many of the constant-factor approximations for @math -median can be extended to the @math -means problem as well. The paper of @cite_8 is closely related to ours: it analyzes the same local search algorithm we consider, and uses properties of @math -means in Euclidean spaces to obtain a @math -approximation. Our results for hold for general metrics, and can essentially be viewed as extensions of their results.
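As an illustration of the practical heuristic mentioned above, here is a minimal pure-Python sketch of Lloyd's algorithm for Euclidean k-means. It is not the swap-based local search analyzed in the paper, and the naive initialization and stopping rule are simplifying assumptions made for brevity.

```python
import random

def lloyd_kmeans(points, k, iters=100, seed=0):
    """Minimal Lloyd's algorithm for Euclidean k-means.

    points: list of equal-length tuples of floats; returns (centers, clusters).
    """
    rng = random.Random(seed)
    centers = rng.sample(points, k)  # naive initialization: k distinct input points
    for _ in range(iters):
        # Assignment step: send each point to its nearest current center.
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centers[i])))
            clusters[j].append(p)
        # Update step: move each center to the centroid of its cluster.
        new_centers = []
        for i, cl in enumerate(clusters):
            if cl:
                new_centers.append(tuple(sum(c) / len(cl) for c in zip(*cl)))
            else:
                new_centers.append(centers[i])  # keep the center of an empty cluster
        if new_centers == centers:
            break
        centers = new_centers
    return centers, clusters

pts = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.1, 4.9)]
centers, clusters = lloyd_kmeans(pts, 2)
print(centers)  # roughly one center near (0.05, 0.1) and one near (5.05, 4.95)
```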
{ "cite_N": [ "@cite_27", "@cite_7", "@cite_8" ], "mid": [ "2150593711", "2110105238", "2199495299" ], "abstract": [ "It has long been realized that in pulse-code modulation (PCM), with a given ensemble of signals to handle, the quantum values should be spaced more closely in the voltage regions where the signal amplitude is more likely to fall. It has been shown by Panter and Dite that, in the limit as the number of quanta becomes infinite, the asymptotic fractional density of quanta per unit voltage should vary as the one-third power of the probability density per unit voltage of signal amplitudes. In this paper the corresponding result for any finite number of quanta is derived; that is, necessary conditions are found that the quanta and associated quantization intervals of an optimum finite quantization scheme must satisfy. The optimization criterion used is that the average quantization noise power be a minimum. It is shown that the result obtained here goes over into the Panter and Dite result as the number of quanta become large. The optimum quautization schemes for 2^ b quanta, b=1,2, , 7 , are given numerically for Gaussian and for Laplacian distribution of signal amplitudes.", "We present the first linear time (1 + spl epsiv )-approximation algorithm for the k-means problem for fixed k and spl epsiv . Our algorithm runs in O(nd) time, which is linear in the size of the input. Another feature of our algorithm is its simplicity - the only technique involved is random sampling.", "In k-means clustering we are given a set of n data points in d-dimensional space Rd and an integer k, and the problem is to determine a set of k points in &#211C;d, called centers, to minimize the mean squared distance from each data point to its nearest center. No exact polynomial-time algorithms are known for this problem. Although asymptotically efficient approximation algorithms exist, these algorithms are not practical due to the extremely high constant factors involved. There are many heuristics that are used in practice, but we know of no bounds on their performance.We consider the question of whether there exists a simple and practical approximation algorithm for k-means clustering. We present a local improvement heuristic based on swapping centers in and out. We prove that this yields a (9+e)-approximation algorithm. We show that the approximation factor is almost tight, by giving an example for which the algorithm achieves an approximation factor of (9-e). To establish the practical value of the heuristic, we present an empirical study that shows that, when combined with Lloyd's algorithm, this heuristic performs quite well in practice." ] }
0809.2554
1529111168
We study local search algorithms for metric instances of facility location problems: the uncapacitated facility location problem (UFL), as well as uncapacitated versions of the @math -median, @math -center and @math -means problems. All these problems admit natural local search heuristics: for example, in the UFL problem the natural moves are to open a new facility, close an existing facility, and to swap a closed facility for an open one; in @math -medians, we are allowed only swap moves. The local-search algorithm for @math -median was analyzed by (SIAM J. Comput. 33(3):544-562, 2004), who used a clever coupling'' argument to show that local optima had cost at most constant times the global optimum. They also used this argument to show that the local search algorithm for UFL was 3-approximation; their techniques have since been applied to other facility location problems. In this paper, we give a proof of the @math -median result which avoids this coupling argument. These arguments can be used in other settings where the arguments have been used. We also show that for the problem of opening @math facilities @math to minimize the objective function @math , the natural swap-based local-search algorithm is a @math -approximation. This implies constant-factor approximations for @math -medians (when @math ), and @math -means (when @math ), and an @math -approximation algorithm for the @math -center problem (which is essentially @math ).
Tight bounds for the @math -center problem are known: there is a @math -approximation algorithm due to @cite_18 @cite_10 , and this is tight unless @math .
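One standard way to realize a factor-2 guarantee for metric k-center is the farthest-point greedy heuristic; the sketch below follows the usual textbook presentation of this idea rather than reproducing the constructions of the cited papers, and the example data are made up.

```python
def k_center_greedy(points, k, dist):
    """Farthest-point greedy heuristic for metric k-center.

    Repeatedly picks the point farthest from the centers chosen so far;
    in any metric space this is a 2-approximation to the optimal
    maximum cluster radius.
    """
    centers = [points[0]]                        # arbitrary first center
    d = [dist(p, centers[0]) for p in points]    # distance to the nearest center
    while len(centers) < k:
        i = max(range(len(points)), key=lambda j: d[j])
        centers.append(points[i])
        d = [min(d[j], dist(points[j], points[i])) for j in range(len(points))]
    return centers, max(d)  # chosen centers and the resulting covering radius

# Example on the line with the absolute-value metric.
pts = [0.0, 1.0, 2.0, 10.0, 11.0]
print(k_center_greedy(pts, 2, lambda a, b: abs(a - b)))  # ([0.0, 11.0], 2.0)
```

On this toy instance the greedy radius is 2 while the optimal radius (centers 1 and 10) is 1, which matches the factor-2 guarantee.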
{ "cite_N": [ "@cite_18", "@cite_10" ], "mid": [ "1973264045", "2044028871" ], "abstract": [ "The problem of clustering a set of points so as to minimize the maximum intercluster distance is studied. An O(kn) approximation algorithm, where n is the number of points and k is the number of clusters, that guarantees solutions with an objective function value within two times the optimal solution value is presented. This approximation algorithm succeeds as long as the set of points satisfies the triangular inequality. We also show that our approximation algorithm is best possible, with respect to the approximation bound, if PZ NP.", "In this paper a powerful, and yet simple, technique for devising approximation algorithms for a wide variety of NP-complete problems in routing, location, and communication network design is investigated. Each of the algorithms presented here delivers an approximate solution guaranteed to be within a constant factor of the optimal solution. In addition, for several of these problems we can show that unless P = NP, there does not exist a polynomial-time algorithm that has a better performance guarantee." ] }
0809.2554
1529111168
We study local search algorithms for metric instances of facility location problems: the uncapacitated facility location problem (UFL), as well as uncapacitated versions of the @math -median, @math -center and @math -means problems. All these problems admit natural local search heuristics: for example, in the UFL problem the natural moves are to open a new facility, close an existing facility, and to swap a closed facility for an open one; in @math -medians, we are allowed only swap moves. The local-search algorithm for @math -median was analyzed by (SIAM J. Comput. 33(3):544-562, 2004), who used a clever coupling'' argument to show that local optima had cost at most constant times the global optimum. They also used this argument to show that the local search algorithm for UFL was 3-approximation; their techniques have since been applied to other facility location problems. In this paper, we give a proof of the @math -median result which avoids this coupling argument. These arguments can be used in other settings where the arguments have been used. We also show that for the problem of opening @math facilities @math to minimize the objective function @math , the natural swap-based local-search algorithm is a @math -approximation. This implies constant-factor approximations for @math -medians (when @math ), and @math -means (when @math ), and an @math -approximation algorithm for the @math -center problem (which is essentially @math ).
For the uncapacitated metric facility location (UFL) problem, the first constant-factor approximation was given by @cite_21 ; subsequent approximation algorithms and hardness results have been given by @cite_21 @cite_3 @cite_17 @cite_13 @cite_0 @cite_11 @cite_20 @cite_19 @cite_5 @cite_9 @cite_16 @cite_1 @cite_2 @cite_6 . It remains a tantalizing open problem to close the gap between the best known approximation factor of @math @cite_22 and the hardness result of @math @cite_14 .
{ "cite_N": [ "@cite_11", "@cite_14", "@cite_22", "@cite_9", "@cite_21", "@cite_1", "@cite_3", "@cite_6", "@cite_0", "@cite_19", "@cite_2", "@cite_5", "@cite_16", "@cite_13", "@cite_20", "@cite_17" ], "mid": [ "", "2101622070", "2949839236", "", "", "", "", "", "", "2107390991", "2003207175", "", "", "", "2033647570", "113414296" ], "abstract": [ "", "A fundamental facility location problem is to choose the location of facilities, such as industrial plants and warehouses, to minimize the cost of satisfying the demand for some commodity. There are associated costs for locating the facilities, as well as transportation costs for distributing the commodities. We assume that the transportation costs form a metric. This problem is commonly referred to as theuncapacitated facility locationproblem. Application to bank account location and clustering, as well as many related pieces of work, are discussed by Cornuejols, Nemhauser, and Wolsey. Recently, the first constant factor approximation algorithm for this problem was obtained by Shmoys, Tardos, and Aardal. We show that a simple greedy heuristic combined with the algorithm by Shmoys, Tardos, and Aardal, can be used to obtain an approximation guarantee of 2.408. We discuss a few variants of the problem, demonstrating better approximation factors for restricted versions of the problem. We also show that the problem is max SNP-hard. However, the inapproximability constants derived from the max SNP hardness are very close to one. By relating this problem to Set Cover, we prove a lower bound of 1.463 on the best possible approximation ratio, assumingNP?DTIMEnO(loglogn)].", "We obtain a 1.5-approximation algorithm for the metric uncapacitated facility location problem (UFL), which improves on the previously best known 1.52-approximation algorithm by Mahdian, Ye and Zhang. Note, that the approximability lower bound by Guha and Khuller is 1.463. An algorithm is a ( @math , @math )-approximation algorithm if the solution it produces has total cost at most @math , where @math and @math are the facility and the connection cost of an optimal solution. Our new algorithm, which is a modification of the @math -approximation algorithm of Chudak and Shmoys, is a (1.6774,1.3738)-approximation algorithm for the UFL problem and is the first one that touches the approximability limit curve @math established by Jain, Mahdian and Saberi. As a consequence, we obtain the first optimal approximation algorithm for instances dominated by connection costs. When combined with a (1.11,1.7764)-approximation algorithm proposed by , and later analyzed by , we obtain the overall approximation guarantee of 1.5 for the metric UFL problem. We also describe how to use our algorithm to improve the approximation ratio for the 3-level version of UFL.", "", "", "", "", "", "", "We develop a general method for turning a primal-dual algorithm into a group strategy proof cost-sharing mechanism. We use our method to design approximately budget balanced cost sharing mechanisms for two NP-complete problems: metric facility location, and single source rent-or-buy network design. Both mechanisms are competitive, group strategyproof and recover a constant fraction of the cost. For the facility location game our cost-sharing method recovers a 1 3rd of the total cost, while in the network design game the cost shares pay for a 1 15 fraction of the cost of the solution.", "We analyze local search heuristics for the metric k-median and facility location problems. 
We define the locality gap of a local search procedure for a minimization problem as the maximum ratio of a locally optimum solution (obtained using this procedure) to the global optimum. For k-median, we show that local search with swaps has a locality gap of 5. Furthermore, if we permit up to p facilities to be swapped simultaneously, then the locality gap is 3+2 p. This is the first analysis of a local search for k-median that provides a bounded performance guarantee with only k medians. This also improves the previous known 4 approximation for this problem. For uncapacitated facility location, we show that local search, which permits adding, dropping, and swapping a facility, has a locality gap of 3. This improves the bound of 5 given by M. Korupolu, C. Plaxton, and R. Rajaraman [Analysis of a Local Search Heuristic for Facility Location Problems, Technical Report 98-30, DIMACS, 1998]. We also consider a capacitated facility location problem where each facility has a capacity and we are allowed to open multiple copies of a facility. For this problem we introduce a new local search operation which opens one or more copies of a facility and drops zero or more facilities. We prove that this local search has a locality gap between 3 and 4.", "", "", "", "We present improved combinatorial approximation algorithms for the uncapacitated facility location problem. Two central ideas in most of our results are cost scaling and greedy improvement. We present a simple greedy local search algorithm which achieves an approximation ratio of @math in @math time. This also yields a bicriteria approximation tradeoff of @math for facility cost versus service cost which is better than previously known tradeoffs and close to the best possible. Combining greedy improvement and cost scaling with a recent primal-dual algorithm for facility location due to Jain and Vazirani, we get an approximation ratio of @math in @math time. This is very close to the approximation guarantee of the best known algorithm which is linear programming (LP)-based. Further, combined with the best known LP-based algorithm for facility location, we get a very slight improvement in the approximation factor for facility location, achieving @math . We also consider a variant of the capacitated facility location problem and present improved approximation algorithms for this.", "We design a new approximation algorithm for the metric uncapacitated facility location problem. This algorithm is of LP rounding type and is based on a rounding technique developed in [5,6,7]." ] }
0809.2851
1680755002
In previous research it has been shown that link-based web page metrics can be used to predict experts' assessment of quality. We are interested in a related question: do expert rankings of real-world entities correlate with search engine rankings of corresponding web resources? For example, each year US News & World Report publishes a list of (among others) top 50 graduate business schools. Does their expert ranking correlate with the search engine ranking of the URLs of those business schools? To answer this question we conducted 9 experiments using 8 expert rankings on a range of academic, athletic, financial and popular culture topics. We compared the expert rankings with the rankings in Google, Live Search (formerly MSN) and Yahoo (with list lengths of 10, 25, and 50). In 57 search engine vs. expert comparisons, only 1 strong and 4 moderate correlations were statistically significant. In 42 inter-search engine comparisons, only 2 strong and 4 moderate correlations were statistically significant. The correlations appeared to decrease with the size of the lists: the 3 strong correlations were for lists of 10, the 8 moderate correlations were for lists of 25, and no correlations were found for lists of 50.
"Does 'Authority' mean 'Quality'?" is the question @cite_7 asked when they evaluated the potential of link- and content-based algorithms to identify high-quality web pages. Human experts rated web documents from the Yahoo directory, related to five popular topics, by their quality. They found a high correlation between the rankings of the human experts, leading to the conclusion that there is a common notion of quality. By computing link-based metrics as well as analyzing the link neighborhood of the web pages in their dataset, they were able to evaluate the performance of machine ranking methods. Here too they found a high correlation between in-degree, Kleinberg's authority score @cite_5 , and PageRank. They isolated the documents that the human experts rated as good quality and evaluated the performance of the algorithms on that list in terms of precision at @math and at @math . In-degree, e.g., has a precision at @math of @math , which means that on average almost @math of the first @math documents it returns would be rated good by the experts. In general they find that in-degree, authority score, and PageRank are all highly correlated with rankings provided by experts. Thus, web document quality can be estimated with hyperlink-based metrics.
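For clarity about the metric being quoted, precision at k is simply the fraction of the top k returned documents that the experts judged to be good. The following minimal Python sketch, with made-up document identifiers, shows the calculation used in such evaluations.

```python
def precision_at_k(ranked_ids, good_ids, k):
    """Fraction of the top-k ranked documents judged 'good' by the experts."""
    top_k = ranked_ids[:k]
    return sum(1 for doc in top_k if doc in good_ids) / len(top_k)

# Hypothetical ranking and expert judgments, just to show the computation.
ranking = ["d3", "d7", "d1", "d9", "d4", "d2", "d8", "d5", "d6", "d0"]
good = {"d3", "d1", "d4", "d2", "d5"}
print(precision_at_k(ranking, good, 5))   # 0.6
print(precision_at_k(ranking, good, 10))  # 0.5
```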
{ "cite_N": [ "@cite_5", "@cite_7" ], "mid": [ "2138621811", "2166227910" ], "abstract": [ "The network structure of a hyperlinked environment can be a rich source of information about the content of the environment, provided we have effective means for understanding it. We develop a set of algorithmic tools for extracting information from the link structures of such environments, and report on experiments that demonstrate their effectiveness in a variety of context on the World Wide Web. The central issue we address within our framework is the distillation of broad search topics, through the discovery of “authorative” information sources on such topics. We propose and test an algorithmic formulation of the notion of authority, based on the relationship between a set of relevant authoritative pages and the set of “hub pages” that join them together in the link structure. Our formulation has connections to the eigenvectors of certain matrices associated with the link graph; these connections in turn motivate additional heuristrics for link-based analysis.", "For many topics, the World Wide Web contains hundreds or thousands of relevant documents of widely varying quality. Users face a daunting challenge in identifying a small subset of documents worthy of their attention. Link analysis algorithms have received much interest recently, in large part for their potential to identify high quality items. We report here on an experimental evaluation of this potential. We evaluated a number of link and content-based algorithms using a dataset of web documents rated for quality by human topic experts. Link-based metrics did a good job of picking out high-quality items. Precision at 5 is about 0.75, and precision at 10 is about 0.55; this is in a dataset where 0.32 of all documents were of high quality. Surprisingly, a simple content-based metric performed nearly as well; ranking documents by the total number of pages on their containing site." ] }
0809.2851
1680755002
In previous research it has been shown that link-based web page metrics can be used to predict experts' assessment of quality. We are interested in a related question: do expert rankings of real-world entities correlate with search engine rankings of corresponding web resources? For example, each year US News & World Report publishes a list of (among others) top 50 graduate business schools. Does their expert ranking correlate with the search engine ranking of the URLs of those business schools? To answer this question we conducted 9 experiments using 8 expert rankings on a range of academic, athletic, financial and popular culture topics. We compared the expert rankings with the rankings in Google, Live Search (formerly MSN) and Yahoo (with list lengths of 10, 25, and 50). In 57 search engine vs. expert comparisons, only 1 strong and 4 moderate correlations were statistically significant. In 42 inter-search engine comparisons, only 2 strong and 4 moderate correlations were statistically significant. The correlations appeared to decrease with the size of the lists: the 3 strong correlations were for lists of 10, the 8 moderate correlations were for lists of 25, and no correlations were found for lists of 50.
Upstill, Craswell, and Hawking @cite_8 studied the PageRank and in-degree of URLs for Fortune 500 and Fortune Most Admired companies. They found that companies on those lists averaged about one point more PageRank (on the Google toolbar's self-reported 0-10 scale) than a large selection of other companies. They also found that IT companies typically had higher PageRank than non-IT companies. As in @cite_7 , they found in-degree highly correlated with PageRank.
{ "cite_N": [ "@cite_7", "@cite_8" ], "mid": [ "2166227910", "135168798" ], "abstract": [ "For many topics, the World Wide Web contains hundreds or thousands of relevant documents of widely varying quality. Users face a daunting challenge in identifying a small subset of documents worthy of their attention. Link analysis algorithms have received much interest recently, in large part for their potential to identify high quality items. We report here on an experimental evaluation of this potential. We evaluated a number of link and content-based algorithms using a dataset of web documents rated for quality by human topic experts. Link-based metrics did a good job of picking out high-quality items. Precision at 5 is about 0.75, and precision at 10 is about 0.55; this is in a dataset where 0.32 of all documents were of high quality. Surprisingly, a simple content-based metric performed nearly as well; ranking documents by the total number of pages on their containing site.", "Measures based on the Link Recommendation Assumption are hypothesised to help modern Web search engines rank ‘important, high quality’ pages ahead of relevant but less valuable pages and to reject ‘spam’. We tested these hypotheses using inlink counts and PageRank scores readily obtainable from search engines Google and Fast. We found that the average Google-reported PageRank of websites operated by Fortune 500 companies was approximately one point higher than the average for a large selection of companies. The same was true for Fortune Most Admired companies. A substantially bigger difference was observed in favour of companies with famous brands. Investigating less desirable biases, we found a one point bias toward technology companies, and a two point bias in favour of IT companies listed in the Wired 40. We found negligible bias in favour of US companies. Log of indegree was highly correlated with Google-reported PageRank scores, and just as effective when predicting desirable company attributes. Further, we found that PageRank scores for sites within a known spam network were no lower than would be expected on the basis of their indegree. We encounter no compelling evidence to support the use of PageRank over indegree." ] }
0809.2851
1680755002
In previous research it has been shown that link-based web page metrics can be used to predict experts' assessment of quality. We are interested in a related question: do expert rankings of real-world entities correlate with search engine rankings of corresponding web resources? For example, each year US News & World Report publishes a list of (among others) top 50 graduate business schools. Does their expert ranking correlate with the search engine ranking of the URLs of those business schools? To answer this question we conducted 9 experiments using 8 expert rankings on a range of academic, athletic, financial and popular culture topics. We compared the expert rankings with the rankings in Google, Live Search (formerly MSN) and Yahoo (with list lengths of 10, 25, and 50). In 57 search engine vs. expert comparisons, only 1 strong and 4 moderate correlations were statistically significant. In 42 inter-search engine comparisons, only 2 strong and 4 moderate correlations were statistically significant. The correlations appeared to decrease with the size of the lists: the 3 strong correlations were for lists of 10, the 8 moderate correlations were for lists of 25, and no correlations were found for lists of 50.
Bharat and Mihaila @cite_15 propose a ranking scheme based on authority, where the most authoritative pages receive the highest ranking. Their algorithm is based on a special set of "expert documents", defined as web pages about a certain topic with many links to non-affiliated web pages on that topic. Non-affiliated pages are pages from different domains with sufficiently different IP addresses. These expert documents are not chosen manually but are picked automatically as long as they meet certain requirements (sufficient out-degree, etc.). In response to a user query, the most relevant expert documents are isolated. The proposed scheme locates relevant links within the expert documents and follows them to identify target pages. These pages are finally ranked according to the number and relevance of expert documents pointing to them and presented to the end user. Bharat and Mihaila evaluated their algorithm against three commercial search engines and found that it performs as well as, and in some cases better than, the top search engine at locating the home page for a specific topic. The same holds for discovering pages relevant to a topic for which many good pages exist.
{ "cite_N": [ "@cite_15" ], "mid": [ "2088546029" ], "abstract": [ "Some methods for rank correlation in evaluation are examined and their relative advantages and disadvantages are discussed. In particular, it is suggested that different test statistics should be used for providing additional information about the experiments other that the one provided by statistical significance testing. Kendall's τ is often used for testing-rank correlation, yet it is little appropriate if the objective of the test is different from what τ was designed for. In particular, attention should be paid to the null hypothesis. Other measures for rank correlation are described. If one test statistic suggests to reject a hypothesis, other test statistics should be used to support or to revise the decision. The paper then focuses on rank correlation between webpage lists ordered by PageRank for applying the general reflections on these test statistics. An interpretation of PageRank behaviour is provided on the basis of the discussion of the test statistics for rank correlation." ] }
0809.2851
1680755002
In previous research it has been shown that link-based web page metrics can be used to predict experts' assessment of quality. We are interested in a related question: do expert rankings of real-world entities correlate with search engine rankings of corresponding web resources? For example, each year US News & World Report publishes a list of (among others) top 50 graduate business schools. Does their expert ranking correlate with the search engine ranking of the URLs of those business schools? To answer this question we conducted 9 experiments using 8 expert rankings on a range of academic, athletic, financial and popular culture topics. We compared the expert rankings with the rankings in Google, Live Search (formerly MSN) and Yahoo (with list lengths of 10, 25, and 50). In 57 search engine vs. expert comparisons, only 1 strong and 4 moderate correlations were statistically significant. In 42 inter-search engine comparisons, only 2 strong and 4 moderate correlations were statistically significant. The correlations appeared to decrease with the size of the lists: the 3 strong correlations were for lists of 10, the 8 moderate correlations were for lists of 25, and no correlations were found for lists of 50.
The authors of @cite_12 observe a "rich-get-richer" phenomenon where popular pages tend to get even more popular, since search engines repeatedly return popular pages first. As other studies by Cho @cite_10 @cite_21 and Baeza-Yates @cite_19 have shown, PageRank is significantly biased against new (and thus unpopular) pages, which makes it hard for these pages to draw users' attention even if they are potentially of high quality. This means the popularity of a page can be much lower than its actual quality. They therefore propose page quality as an alternative ranking metric. By defining the quality of a web page as the probability that a user likes the page when seeing it for the first time, the authors claim to alleviate the drawbacks of PageRank. Borrowing from PageRank the intuition that a user who likes a page will link to it, the algorithm is able to identify new, high-quality pages much faster than PageRank and thus shorten the time it takes for them to get noticed.
{ "cite_N": [ "@cite_19", "@cite_21", "@cite_10", "@cite_12" ], "mid": [ "2080676333", "", "2166743161", "2134308336" ], "abstract": [ "In this short paper we estimate the size of the public indexable web at 11.5 billion pages. We also estimate the overlap and the index size of Google, MSN, Ask Teoma and Yahoo!", "", "Recent studies show that a majority of Web page accesses are referred by search engines. In this paper we study the widespread use of Web search engines and its impact on the ecology of the Web. In particular, we study how much impact search engines have on the popularity evolution of Web pages. For example, given that search engines return currently popular\" pages at the top of search results, are we somehow penalizing newly created pages that are not very well known yet? Are popular pages getting even more popular and new pages completely ignored? We first show that this unfortunate trend indeed exists on the Web through an experimental study based on real Web data. We then analytically estimate how much longer it takes for a new page to attract a large number of Web users when search engines return only popular pages at the top of search results. Our result shows that search engines can have an immensely worrisome impact on the discovery of new Web pages.", "In a number of recent studies [4, 8] researchers have found that because search engines repeatedly return currently popular pages at the top of search results, popular pages tend to get even more popular, while unpopular pages get ignored by an average user. This \"rich-get-richer\" phenomenon is particularly problematic for new and high-quality pages because they may never get a chance to get users' attention, decreasing the overall quality of search results in the long run. In this paper, we propose a new ranking function, called page quality that can alleviate the problem of popularity-based ranking. We first present a formal framework to study the search engine bias by discussing what is an \"ideal\" way to measure the intrinsic quality of a page. We then compare how PageRank, the current ranking metric used by major search engines, differs from this ideal quality metric. This framework will help us investigate the search engine bias in more concrete terms and provide clear understanding why PageRank is effective in many cases and exactly when it is problematic. We then propose a practical way to estimate the intrinsic page quality to avoid the inherent bias of PageRank. We derive our proposed quality estimator through a careful analysis of a reasonable web user model, and we present experimental results that show the potential of our proposed estimator. We believe that our quality estimator has the potential to alleviate the rich-get-richer phenomenon and help new and high-quality pages get the attention that they deserve." ] }
0809.2851
1680755002
In previous research it has been shown that link-based web page metrics can be used to predict experts' assessment of quality. We are interested in a related question: do expert rankings of real-world entities correlate with search engine rankings of corresponding web resources? For example, each year US News & World Report publishes a list of (among others) top 50 graduate business schools. Does their expert ranking correlate with the search engine ranking of the URLs of those business schools? To answer this question we conducted 9 experiments using 8 expert rankings on a range of academic, athletic, financial and popular culture topics. We compared the expert rankings with the rankings in Google, Live Search (formerly MSN) and Yahoo (with list lengths of 10, 25, and 50). In 57 search engine vs. expert comparisons, only 1 strong and 4 moderate correlations were statistically significant. In 42 inter-search engine comparisons, only 2 strong and 4 moderate correlations were statistically significant. The correlations appeared to decrease with the size of the lists: the 3 strong correlations were for lists of 10, the 8 moderate correlations were for lists of 25, and no correlations were found for lists of 50.
Lim et al. @cite_9 introduce two models for measuring the quality of articles from an online community such as Wikipedia without interpreting their content. In the basic model, the quality of an article is derived from the authority of its contributors and the size of each contribution (in number of words). The peer-review model extends the basic model with a review aspect of the article content: it assigns higher quality to words that "survive" reviews.
{ "cite_N": [ "@cite_9" ], "mid": [ "2098930119" ], "abstract": [ "Using open source Web editing software (e.g., wiki), online community users can now easily edit, review and publish articles collaboratively. While much useful knowledge can be derived from these articles, content users and critics are often concerned about their qualities. In this paper, we develop two models, namely basic model and peer review model, for measuring the qualities of these articles and the authorities of their contributors. We represent collaboratively edited articles and their contributors in a bipartite graph. While the basic model measures an article?s quality using both the authorities of contributors and the amount of contribution from each contributor, the peer review model extends the former by considering the review aspect of article content. We present results of experiments conducted on some Wikipedia pages and their contributors. Our result show that the two models can effectively determine the articles? qualities and contributors? authorities using the collaborative nature of online communities." ] }
0809.2851
1680755002
In previous research it has been shown that link-based web page metrics can be used to predict experts' assessment of quality. We are interested in a related question: do expert rankings of real-world entities correlate with search engine rankings of corresponding web resources? For example, each year US News & World Report publishes a list of (among others) top 50 graduate business schools. Does their expert ranking correlate with the search engine ranking of the URLs of those business schools? To answer this question we conducted 9 experiments using 8 expert rankings on a range of academic, athletic, financial and popular culture topics. We compared the expert rankings with the rankings in Google, Live Search (formerly MSN) and Yahoo (with list lengths of 10, 25, and 50). In 57 search engine vs. expert comparisons, only 1 strong and 4 moderate correlations were statistically significant. In 42 inter-search engine comparisons, only 2 strong and 4 moderate correlations were statistically significant. The correlations appeared to decrease with the size of the lists: the 3 strong correlations were for lists of 10, the 8 moderate correlations were for lists of 25, and no correlations were found for lists of 50.
An approach to automatically predicting information quality is given in @cite_13 . Analyzing news documents, the authors observe an association between users' quality scores and the occurrence and prevalence of certain textual features such as readability and grammar.
{ "cite_N": [ "@cite_13" ], "mid": [ "2048933106" ], "abstract": [ "We report here empirical results of a series of studies aimed at automatically predicting information quality in news documents. Multiple research methods and data analysis techniques enabled a good level of machine prediction of information quality. Procedures regarding user experiments and statistical analysis are described." ] }
0809.0124
2951317363
Recognizing analogies, synonyms, antonyms, and associations appear to be four distinct tasks, requiring distinct NLP algorithms. In the past, the four tasks have been treated independently, using a wide variety of algorithms. These four semantic classes, however, are a tiny sample of the full range of semantic phenomena, and we cannot afford to create ad hoc algorithms for each semantic phenomenon; we need to seek a unified approach. We propose to subsume a broad range of phenomena under analogies. To limit the scope of this paper, we restrict our attention to the subsumption of synonyms, antonyms, and associations. We introduce a supervised corpus-based machine learning algorithm for classifying analogous word pairs, and we show that it can solve multiple-choice SAT analogy questions, TOEFL synonym questions, ESL synonym-antonym questions, and similar-associated-both questions from cognitive psychology.
One of the tasks in SemEval 2007 was the classification of semantic relations between nominals @cite_21 . The problem is to classify semantic relations between nouns and noun compounds in the context of a sentence. The task attracted 14 teams who created 15 systems, all of which used supervised machine learning with features that were lexicon-based, corpus-based, or both.
{ "cite_N": [ "@cite_21" ], "mid": [ "2152358231" ], "abstract": [ "The NLP community has shown a renewed interest in deeper semantic analyses, among them automatic recognition of relations between pairs of words in a text. We present an evaluation task designed to provide a framework for comparing different approaches to classifying semantic relations between nominals in a sentence. This is part of SemEval, the 4th edition of the semantic evaluation event previously known as SensEval. We define the task, describe the training test data and their creation, list the participating systems and discuss their results. There were 14 teams who submitted 15 systems." ] }
0809.0257
2951806375
The 3- problem is also called the problem on 3-uniform hypergraphs. In this paper, we address kernelizations of the problem on 3-uniform hypergraphs. We show that this problem admits a linear kernel in three classes of 3-uniform hypergraphs. We also obtain lower and upper bounds on the kernel size for them by the parametric duality.
Buss @cite_7 has given a kernelization with kernel size @math for the problem on graphs by putting "high-degree elements" into the cover. Similar to Buss' reduction, Niedermeier and Rossmanith @cite_9 have proposed a cubic-size kernelization for the problem on 3-uniform hypergraphs.
{ "cite_N": [ "@cite_9", "@cite_7" ], "mid": [ "2001159171", "1973852671" ], "abstract": [ "Given a collection C of subsets of size three of a finite set S and a positive integer k, the 3-Hitting Set problem is to determine a subset S' ⊆ S with |S'| ≤ k, so that S' contains at least one element from each subset in C. The problem is NP-complete, and is motivated, for example, by applications in computational biology. Improving previous work, we give an O(2.270k + n) time algorithm for 3-Hitting Set, which is efficient for small values of k, a typical occurrence in some applications. For d-Hitting Set we present an O(ck + n) time algorithm with c = d - 1 + O(d-1).", "Classes of machines using very limited amounts of nondeterminism are studied. The @math question is related to questions about classes lying within P. Complete sets for these classes are given." ] }
0809.0257
2951806375
The 3- problem is also called the problem on 3-uniform hypergraphs. In this paper, we address kernelizations of the problem on 3-uniform hypergraphs. We show that this problem admits a linear kernel in three classes of 3-uniform hypergraphs. We also obtain lower and upper bounds on the kernel size for them by the parametric duality.
Fellows et al. @cite_0 @cite_19 @cite_10 have introduced the crown reduction and obtained a @math -kernelization for the problem on graphs. Recently, Abu-Khzam @cite_6 has further reduced the kernel of this problem on 3-uniform hypergraphs to quadratic size by employing the crown reduction.
{ "cite_N": [ "@cite_0", "@cite_19", "@cite_10", "@cite_6" ], "mid": [ "75155422", "2141946535", "1518294541", "1836501071" ], "abstract": [ "", "The two objectives of this paper are: (1) to articulate three new general techniques for designing FPT algorithms, and (2) to apply these to obtain new FPT algorithms for Set Splitting and Vertex Cover. In the case of Set Splitting, we improve the best previous ( O ^*(72^k) ) FPT algorithm due to Dehne, Fellows and Rosamond [DFR03], to ( O ^*(8^k) ) by an approach based on greedy localization in conjunction with modeled crown reduction. In the case of Vertex Cover, we describe a new approach to 2k kernelization based on iterative compression and crown reduction, providing a potentially useful alternative to the Nemhauser-Trotter 2k kernelization.", "This survey reviews the basic notions of parameterized complexity, and describes some new approaches to designing FPT algorithms and problem reductions for graph problems.", "A kernelization algorithm for the 3-Hitting-Set problem is presented along with a general kernelization for d-Hitting-Set problems. For 3-Hitting-Set, a quadratic kernel is obtained by exploring properties of yes instances and employing what is known as crown reduction. Any 3-Hitting-Set instance is reduced into an equivalent instance that contains at most 5k2 + k elements (or vertices). This kernelization is an improvement over previously known methods that guarantee cubic-size kernels. Our method is used also to obtain a quadratic kernel for the Triangle Vertex Deletion problem. For a constant d ≥ 3, a kernelization of d-Hitting-Set is achieved by a generalization of the 3-Hitting-Set method, and guarantees a kernel whose order does not exceed (2d - 1)kd-1 + k." ] }
0809.0719
2953075422
This paper is concerned with the fast computation of Fourier integral operators of the general form @math , where @math is a frequency variable, @math is a phase function obeying a standard homogeneity condition, and @math is a given input. This is of interest for such fundamental computations are connected with the problem of finding numerical solutions to wave equations, and also frequently arise in many applications including reflection seismology, curvilinear tomography and others. In two dimensions, when the input and output are sampled on @math Cartesian grids, a direct evaluation requires @math operations, which is often times prohibitively expensive. This paper introduces a novel algorithm running in @math time, i. e. with near-optimal computational complexity, and whose overall structure follows that of the butterfly algorithm [Michielssen and Boag, IEEE Trans Antennas Propagat 44 (1996), 1086-1093]. Underlying this algorithm is a mathematical insight concerning the restriction of the kernel @math to subsets of the time and frequency domains. Whenever these subsets obey a simple geometric condition, the restricted kernel has approximately low-rank; we propose constructing such low-rank approximations using a special interpolation scheme, which prefactors the oscillatory component, interpolates the remaining nonoscillatory part and, lastly, remodulates the outcome. A byproduct of this scheme is that the whole algorithm is highly efficient in terms of memory requirement. Numerical results demonstrate the performance and illustrate the empirical properties of this algorithm.
Although FIOs play an important role in the analysis and computation of linear hyperbolic problems, the literature on the fast computation of FIOs is surprisingly limited. The only work addressing the problem in this general form is the article @cite_6 by the authors of the current paper. The operative feature in @cite_6 is an angular partitioning of the frequency domain into @math wedges, each with an opening angle equal to @math . When restricting the input to such a wedge, one can then factor the operator into a product of two simpler operators. The first operator is provably approximately low-rank (and lends itself to efficient computations) whereas the second one is a nonuniform Fourier transform which can be computed rapidly using the nonuniform fast Fourier transform (NFFT) @cite_31 @cite_29 @cite_35 . The resulting algorithm has an @math complexity.
{ "cite_N": [ "@cite_35", "@cite_31", "@cite_29", "@cite_6" ], "mid": [ "", "1981562337", "2010122118", "2063399239" ], "abstract": [ "", "Algorithms for the rapid computation of the forward and inverse discrete Fourier transform for points which are nonequispaced or whose number is unrestricted are presented. The computational procedure is based on approximation using a local Taylor series expansion and the fast Fourier transform (FFT). The forward transform for nonequispaced points is computed as the solution of a linear system involving the inverse Fourier transform. This latter system is solved using the iterative method GMRES with preconditioning. Numerical results are given to confirm the efficiency of the algorithms.", "A group of algorithms is presented generalizing the fast Fourier transform to the case of noninteger frequencies and nonequispaced nodes on the interval @math . The schemes of this paper are based on a combination of certain analytical considerations with the classical fast Fourier transform and generalize both the forward and backward FFTs. Each of the algorithms requires @math arithmetic operations, where @math is the precision of computations and N is the number of nodes. The efficiency of the approach is illustrated by several numerical examples.", "We introduce a general purpose algorithm for rapidly computing certain types of oscillatory integrals which frequently arise in problems connected to wave propagation, general hyperbolic equations, and curvilinear tomography. The problem is to numerically evaluate a so-called Fourier integral operator (FIO) of the form @math at points given on a Cartesian grid. Here, @math is a frequency variable, @math is the Fourier transform of the input @math , @math is an amplitude, and @math is a phase function, which is typically as large as @math ; hence the integral is highly oscillatory. Because a FIO is a dense matrix, a naive matrix vector product with an input given on a Cartesian grid of size @math by @math would require @math operations. This paper develops a new numerical algorithm which requires @math operations and as low as @math in storage space (the constants in front of these estimates are small). It operates by localizing the integral over polar wedges with small angular aperture in the frequency plane. On each wedge, the algorithm factorizes the kernel @math into two components: (1) a diffeomorphism which is handled by means of a nonuniform FFT and (2) a residual factor which is handled by numerical separation of the spatial and frequency variables. The key to the complexity and accuracy estimates is the fact that the separation rank of the residual kernel is provably independent of the problem size. Several numerical examples demonstrate the numerical accuracy and low computational complexity of the proposed methodology. We also discuss the potential of our ideas for various applications such as reflection seismology." ] }
0809.0719
2953075422
This paper is concerned with the fast computation of Fourier integral operators of the general form @math , where @math is a frequency variable, @math is a phase function obeying a standard homogeneity condition, and @math is a given input. This is of interest for such fundamental computations are connected with the problem of finding numerical solutions to wave equations, and also frequently arise in many applications including reflection seismology, curvilinear tomography and others. In two dimensions, when the input and output are sampled on @math Cartesian grids, a direct evaluation requires @math operations, which is often times prohibitively expensive. This paper introduces a novel algorithm running in @math time, i. e. with near-optimal computational complexity, and whose overall structure follows that of the butterfly algorithm [Michielssen and Boag, IEEE Trans Antennas Propagat 44 (1996), 1086-1093]. Underlying this algorithm is a mathematical insight concerning the restriction of the kernel @math to subsets of the time and frequency domains. Whenever these subsets obey a simple geometric condition, the restricted kernel has approximately low-rank; we propose constructing such low-rank approximations using a special interpolation scheme, which prefactors the oscillatory component, interpolates the remaining nonoscillatory part and, lastly, remodulates the outcome. A byproduct of this scheme is that the whole algorithm is highly efficient in terms of memory requirement. Numerical results demonstrate the performance and illustrate the empirical properties of this algorithm.
In a different direction, there has been a great amount of research on other types of oscillatory integral transforms. An important example is the discrete @math -body problem where one wants to evaluate sums of the form $$\sum_{1 \le j \le n} q_j K(|x - x_j|), \qquad K(r) = \frac{e^{i\omega r}}{r},$$ in the high-frequency regime ( @math is large). Such problems appear naturally when solving the Helmholtz equation by means of a boundary integral formulation @cite_1 @cite_17 . A popular approach seeks to compress the oscillatory integral operator by representing it in an appropriate basis such as a local Fourier basis, or a basis extracted from the wavelet packet dictionary @cite_36 @cite_12 @cite_13 @cite_25 . This representation sparsifies the operator, thus allowing fast matrix-vector products. In spite of having good theoretical estimates, this approach has thus far been practically limited to 1D boundaries. One particular issue with this approach is that the evaluation of the remaining nonnegligible coefficients sometimes requires assembling the entire matrix, which can be computationally rather expensive.
{ "cite_N": [ "@cite_36", "@cite_1", "@cite_13", "@cite_25", "@cite_12", "@cite_17" ], "mid": [ "1969569206", "", "", "2096240124", "1993700748", "1517670975" ], "abstract": [ "Abstract The integral ∫0Leiνφ(s,t)f(s)dswith a highly oscillatory kernel (large ν, ν is up to 2000) is considered. This integral is accurately evaluated with an improved trapezoidal rule and effectively transcribed using local Fourier basis and adaptive multiscale local Fourier basis. The representation of the oscillatory kernel in these bases is sparse. The coefficients after the application of local Fourier transform are smoothed. Sometimes this enables us to obtain further compression with wavelets.", "", "", "We examine the use of wavelet packets for the fast solution of integral equations with a highly oscillatory kernel. The redundancy of the wavelet packet transform allows the selection of a basis tailored to the problem at hand. It is shown that a well chosen wavelet packet basis is better suited to compress the discretized system than wavelets. The complexity of the matrix-vector product in an iterative solution method is then substantially reduced. A two-dimensional wavelet packet transform is derived and compared with a number of one-dimensional transforms that were presented earlier in literature. By means of some numerical experiments we illustrate the improved efficiency of the two-dimensional approach.", "Abstract We prove that certain oscillatory boundary integral operators occurring in acoustic scattering computations become sparse when represented in the appropriate local cosine transform orthonormal basis.", "Preface to the Classics Edition Preface Symbols 1. The Riesz-Fredholm theory for compact operators 2. Regularity properties of surface potentials 3. Boundary-value problems for the scalar Helmholtz equation 4. Boundary-value problems for the time-harmonic Maxwell equations and the vector Helmholtz equation 5. Low frequency behavior of solutions to boundary-value problems in scattering theory 6. The inverse scattering problem: exact data 7. Improperly posed problems and compact families 8. The determination of the shape of an obstacle from inexact far-field data 9. Optimal control problems in radiation and scattering theory References Index." ] }
0809.0719
2953075422
This paper is concerned with the fast computation of Fourier integral operators of the general form @math , where @math is a frequency variable, @math is a phase function obeying a standard homogeneity condition, and @math is a given input. This is of interest for such fundamental computations are connected with the problem of finding numerical solutions to wave equations, and also frequently arise in many applications including reflection seismology, curvilinear tomography and others. In two dimensions, when the input and output are sampled on @math Cartesian grids, a direct evaluation requires @math operations, which is often times prohibitively expensive. This paper introduces a novel algorithm running in @math time, i. e. with near-optimal computational complexity, and whose overall structure follows that of the butterfly algorithm [Michielssen and Boag, IEEE Trans Antennas Propagat 44 (1996), 1086-1093]. Underlying this algorithm is a mathematical insight concerning the restriction of the kernel @math to subsets of the time and frequency domains. Whenever these subsets obey a simple geometric condition, the restricted kernel has approximately low-rank; we propose constructing such low-rank approximations using a special interpolation scheme, which prefactors the oscillatory component, interpolates the remaining nonoscillatory part and, lastly, remodulates the outcome. A byproduct of this scheme is that the whole algorithm is highly efficient in terms of memory requirement. Numerical results demonstrate the performance and illustrate the empirical properties of this algorithm.
To the best of our knowledge, the most successful method for the Helmholtz kernel @math -body problem in both 2D and 3D is the high-frequency fast multipole method (HF-FMM) proposed by Rokhlin and his collaborators in a series of papers @cite_4 @cite_18 @cite_28 . This approach combines the analytic properties of the Helmholtz kernel with an FFT-type fast algorithm to speed up the computation of the interaction between well-separated regions. If @math is the number of input and output points as before, the resulting algorithm has an @math computational complexity. Other algorithms using similar techniques can be found in @cite_21 @cite_34 @cite_38 @cite_0 .
{ "cite_N": [ "@cite_38", "@cite_18", "@cite_4", "@cite_28", "@cite_21", "@cite_0", "@cite_34" ], "mid": [ "1968359830", "1987397719", "2089210221", "1963750172", "", "2084385130", "1987724179" ], "abstract": [ "The solution of Helmholtz and Maxwell equations by integral formulations (kernel in exp( i kr) r ) leads to large dense linear systems. Using direct solvers requires large computational costs in O(N 3 ) . Using iterative solvers, the computational cost is reduced to large matrix–vector products. The fast multipole method provides a fast numerical way to compute convolution integrals. Its application to Maxwell and Helmholtz equations was initiated by Rokhlin, based on a multipole expansion of the interaction kernel. A second version, proposed by Chew, is based on a plane–wave expansion of the kernel. We propose a third approach, the stable–plane–wave expansion, which has a lower computational expense than the multipole expansion and does not have the accuracy and stability problems of the plane–wave expansion. The computational complexity is N log N as with the other methods.", "Abstract The diagonal forms are constructed for the translation operators for the Helmholtz equation in three dimensions. While the operators themselves have a fairly complicated structure (described somewhat incompletely by the classical addition theorems for the Bessel functions), their diagonal forms turn out to be quite simple. These diagonal forms are realized as generalized integrals, possess straightforward physical interpretations, and admit stable numerical implementation. This paper uses the obtained analytical apparatus to construct an algorithm for the rapid application to arbitrary vectors of matrices resulting from the discretization of integral equations of the potential theory for the Helmholtz equation in three dimensions. It is an extension to the three-dimensional case of the results of Rokhlin (J. Complexity4(1988), 12-32), where a similar apparatus is developed in the two-dimensional case.", "Abstract The present paper describes an algorithm for rapid solution of boundary value problems for the Helmholtz equation in two dimensions based on iteratively solving integral equations of scattering theory. CPU time requirements of previously published algorithms of this type are of the order n 2 , where n is the number of nodes in the discretization of the boundary of the scatterer. The CPU time requirements of the algorithm of the present paper are n 4 3 , and can be further reduced, making it considerably more practical for large scale problems.", "We describe a wideband version of the Fast Multipole Method for the Helmholtz equation in three dimensions. It unifies previously existing versions of the FMM for high and low frequencies into an algorithm which is accurate and efficient for any frequency, having a CPU time of O(N) if low-frequency computations dominate, or O(NlogN) if high-frequency computations dominate. The performance of the algorithm is illustrated with numerical examples.", "", "The fast multipole method (FMM) has been implemented to speed up the matrix-vector multiply when an iterative method is used to solve the combined field integral equation (CFIE). FMM reduces the complexity from O(N2) to O(N1.5). With a multilevel fast multipole algorithm (MLFMA), it is further reduced to O(N log N). A 110, 592-unknown problem can be solved within 24 h on a SUN Sparc 10. 
© 1995 John Wiley & Sons, Inc.", "We study integral methods applied to the resolution of the Maxwell equations where the linear system is solved using an iterative method which requires only matrix?vector products. The fast multipole method (FMM) is one of the most efficient methods used to perform matrix?vector products and accelerate the resolution of the linear system. A problem involving N degrees of freedom may be solved in CNiterNlogN floating operations, where C is a constant depending on the implementation of the method. In this article several techniques allowing one to reduce the constant C are analyzed. This reduction implies a lower total CPU time and a larger range of application of the FMM. In particular, new interpolation and anterpolation schemes are proposed which greatly improve on previous algorithms. Several numerical tests are also described. These confirm the efficiency and the theoretical complexity of the FMM." ] }
0809.0719
2953075422
This paper is concerned with the fast computation of Fourier integral operators of the general form @math , where @math is a frequency variable, @math is a phase function obeying a standard homogeneity condition, and @math is a given input. This is of interest for such fundamental computations are connected with the problem of finding numerical solutions to wave equations, and also frequently arise in many applications including reflection seismology, curvilinear tomography and others. In two dimensions, when the input and output are sampled on @math Cartesian grids, a direct evaluation requires @math operations, which is often times prohibitively expensive. This paper introduces a novel algorithm running in @math time, i. e. with near-optimal computational complexity, and whose overall structure follows that of the butterfly algorithm [Michielssen and Boag, IEEE Trans Antennas Propagat 44 (1996), 1086-1093]. Underlying this algorithm is a mathematical insight concerning the restriction of the kernel @math to subsets of the time and frequency domains. Whenever these subsets obey a simple geometric condition, the restricted kernel has approximately low-rank; we propose constructing such low-rank approximations using a special interpolation scheme, which prefactors the oscillatory component, interpolates the remaining nonoscillatory part and, lastly, remodulates the outcome. A byproduct of this scheme is that the whole algorithm is highly efficient in terms of memory requirement. Numerical results demonstrate the performance and illustrate the empirical properties of this algorithm.
Finally, the idea of butterfly computations has been applied to the @math -body problem in several ways. The original paper of Michielssen and Boag @cite_19 used this technique to accelerate the computation of the oscillatory interactions between well-separated regions. More recently, Engquist and Ying @cite_7 @cite_23 proposed a multidirectional solution to this problem, where part of the algorithm can be viewed as a butterfly computation between specially selected spatial subdomains.
{ "cite_N": [ "@cite_19", "@cite_23", "@cite_7" ], "mid": [ "2143962756", "1621423264", "2117762537" ], "abstract": [ "A multilevel algorithm is presented for analyzing scattering from electrically large surfaces. The algorithm accelerates the iterative solution of integral equations that arise in computational electromagnetics. The algorithm permits a fast matrix-vector multiplication by decomposing the traditional method of moment matrix into a large number of blocks, with each describing the interaction between distant scatterers. The multiplication of each block by a trial solution vector is executed using a multilevel scheme that resembles a fast Fourier transform (FFT) and that only relies on well-known algebraic techniques. The computational complexity and the memory requirements of the proposed algorithm are O(N log sup 2 N).", "This paper introduces a directional multiscale algorithm for the two dimensional @math -body problem of the Helmholtz kernel with applications to high frequency scattering. The algorithm follows the approach in [Engquist and Ying, SIAM Journal on Scientific Computing, 29 (4), 2007] where the three dimensional case was studied. The main observation is that, for two regions that follow a directional parabolic geometric configuration, the interaction between the points in these two regions through the Helmholtz kernel is approximately low rank. We propose an improved randomized procedure for generating the low rank representations. Based on these representations, we organize the computation of the far field interaction in a multidirectional and multiscale way to achieve maximum efficiency. The proposed algorithm is accurate and has the optimal @math complexity for problems from two dimensional scattering applications. We present numerical results for several test examples to illustrate the algorithm and its application to two dimensional high frequency scattering problems.", "This paper introduces a new directional multilevel algorithm for solving @math -body or @math -point problems with highly oscillatory kernels. These systems often result from the boundary integral formulations of scattering problems and are difficult due to the oscillatory nature of the kernel and the non-uniformity of the particle distribution. We address the problem by first proving that the interaction between a ball of radius @math and a well-separated region has an approximate low rank representation, as long as the well-separated region belongs to a cone with a spanning angle of @math and is at a distance which is at least @math away from from the ball. We then propose an efficient and accurate procedure which utilizes random sampling to generate such a separated, low rank representation. Based on the resulting representations, our new algorithm organizes the high frequency far field computation by a multidirectional and multiscale strategy to achieve maximum efficiency. The algorithm performs well on a large group of highly oscillatory kernels. Our algorithm is proved to have @math computational complexity for any given accuracy when the points are sampled from a two dimensional surface. We also provide numerical results to demonstrate these properties." ] }
0808.3971
2949251435
A clustered base transceiver station (BTS) coordination strategy is proposed for a large cellular MIMO network, which includes full intra-cluster coordination to enhance the sum rate and limited inter-cluster coordination to reduce interference for the cluster edge users. Multi-cell block diagonalization is used to coordinate the transmissions across multiple BTSs in the same cluster. To satisfy per-BTS power constraints, three combined precoder and power allocation algorithms are proposed with different performance and complexity tradeoffs. For inter-cluster coordination, the coordination area is chosen to balance fairness for edge users and the achievable sum rate. It is shown that a small cluster size (about 7 cells) is sufficient to obtain most of the sum rate benefits from clustered coordination while greatly relieving channel feedback requirement. Simulations show that the proposed coordination strategy efficiently reduces interference and provides a considerable sum rate gain for cellular MIMO networks.
Intercell scheduling, where neighboring BTSs cooperatively schedule their transmissions, is a practical strategy to reduce interference, as in each time slot only one BTS in each cluster transmits, and it requires only message exchange comparable to that for handoff. In @cite_21 , it was shown that one major advantage of intercell scheduling compared with conventional frequency reuse is the expanded multiuser diversity gain. The interference reduction comes at the expense of a transmission duty cycle, however, and it does not make full use of the available spatial degrees of freedom.
{ "cite_N": [ "@cite_21" ], "mid": [ "2128192883" ], "abstract": [ "The capacity and robustness of cellular MIMO systems is very sensitive to other-cell interference which will in practice necessitate network level interference reduction strategies. As an alternative to traditional static frequency reuse patterns, this paper investigates intercell scheduling among neighboring base stations. We show analytically that cooperatively scheduled transmission, which is well within the capability of present systems, can achieve an expanded multiuser diversity gain in terms of ergodic capacity as well as almost the same amount of interference reduction as conventional frequency reuse. This capacity gain over conventional frequency reuse is O (M t square-root of log Ns) for dirty paper coding and O (min (Mr, Mt) square-root of log Ns) for time division, where Ns is the number of cooperating base stations employing opportunistic scheduling in an M t x M r MIMO system. From a theoretical standpoint, an interesting aspect of this analysis comes from an altered view of multiuser diversity in the context of a multi-cell system. Previously, multiuser diversity capacity gain has been known to grow as O(log log K), from selecting the maximum of K exponentially-distributed powers. Because multicell considerations such as the positions of the users, lognormal shadowing, and pathless affect the multiuser diversity gain, we find instead that the gain is O(square-root of 2logic K), from selecting the maximum of a compound Iognormal-exponential distribution. Finding the maximum of such a distribution is an additional contribution of the paper." ] }
0808.3971
2949251435
A clustered base transceiver station (BTS) coordination strategy is proposed for a large cellular MIMO network, which includes full intra-cluster coordination to enhance the sum rate and limited inter-cluster coordination to reduce interference for the cluster edge users. Multi-cell block diagonalization is used to coordinate the transmissions across multiple BTSs in the same cluster. To satisfy per-BTS power constraints, three combined precoder and power allocation algorithms are proposed with different performance and complexity tradeoffs. For inter-cluster coordination, the coordination area is chosen to balance fairness for edge users and the achievable sum rate. It is shown that a small cluster size (about 7 cells) is sufficient to obtain most of the sum rate benefits from clustered coordination while greatly relieving channel feedback requirement. Simulations show that the proposed coordination strategy efficiently reduces interference and provides a considerable sum rate gain for cellular MIMO networks.
Recently, BTS coordination has been proposed as an effective technique to mitigate interference in the downlink of multi-cell networks @cite_15 . By sharing information across BTSs and designing downlink signals cooperatively, signals from other cells may be used to assist the transmission instead of acting as interference, and the available degrees of freedom are fully utilized. In @cite_28 , BTS coordination with DPC was first proposed with single-antenna transmitters and receivers in each cell. BTS coordination in a downlink multi-cell MIMO network was studied in @cite_9 , with a per-BTS power constraint and various joint transmission schemes. The maximum achievable common rate in a coordinated network, with zero-forcing (ZF) and DPC, was studied in @cite_11 @cite_31 , which demonstrated a significant gain over conventional single-BTS transmission. With simplified network models, analytical results were derived for multi-cell ZF beamforming in @cite_34 and for various coordination strategies with grouped cell-interior and cell-edge users in @cite_41 . Studies considering practical issues such as limited-capacity backhaul and asynchronous interference can be found in @cite_42 @cite_43 @cite_4 @cite_32 .
{ "cite_N": [ "@cite_31", "@cite_4", "@cite_28", "@cite_41", "@cite_9", "@cite_42", "@cite_32", "@cite_43", "@cite_15", "@cite_34", "@cite_11" ], "mid": [ "2098776483", "2030956801", "2106612364", "1991331367", "2139771702", "", "2156262554", "2101653184", "", "", "" ], "abstract": [ "We quantify the ultimate performance limits of inter-cell coordinatation in a cellular downlink network. The goal is to achieve fairness by maximizing the minimum rate in the network subject to per base power constraints. We first solve the max-min rate problem for a particular zero-forcing dirty paper coding scheme so as to obtain an achievable max-min rate, which serves as a lower bound on the ultimate limit. We then obtain a simple upper bound on the max-min rate of any scheme, and show that the rate achievable by the zero-forcing dirty paper coding scheme is close to this upper bound. We also extend our analysis to coordinated networks with multiple antennas.", "In this contribution we present new achievable rates, for the non-fading uplink channel of a cellular network, with joint cell-site processing, where unlike previous results, the error-free backhaul network has finite capacity per-cell. Namely, the cell-sites are linked to the central joint processor via lossless links with finite capacity. The cellular network is modeled by the circular Wyner model, which yields closed form expressions for the achievable rates. For this idealistic model, we present achievable rates for cell-sites that use compress-and forward scheme, combined with local decoding, and inter-cell time-sharing. These rates are then demonstrated to be rather close to the optimal unlimited backhaul joint processing rates, already for modest backhaul capacities, supporting the potential gain offered by the joint cell-site processing approach.", "A linear pre-processing plus encoding scheme is proposed, which significantly enhances cellular downlink performance, while putting the complexity burden on the transmitting end. The approach is based on LQ factorization of the channel transfer matrix combined with the \"writing on dirty paper\" approach (Caire, G. and Shamai, S., Proc. 38th Annual Allerton Conference on Communication, Control and Computing, 2000) for eliminating the effect of uncorrelated interference, which is fully known at the transmitter but unknown at the receiver. The attainable average rates with the proposed scheme approach those of optimum joint processing at the high SNR region.", "We study the potential benefits of base-station (BS) cooperation for downlink transmission in multicell networks. Based on a modified Wyner-type model with users clustered at the cell-edges, we analyze the dirty-paper-coding (DPC) precoder and several linear precoding schemes, including cophasing, zero-forcing (ZF), and MMSE precoders. For the nonfading scenario with random phases, we obtain analytical performance expressions for each scheme. In particular, we characterize the high signal-to-noise ratio (SNR) performance gap between the DPC and ZF precoders in large networks, which indicates a singularity problem in certain network settings. Moreover, we demonstrate that the MMSE precoder does not completely resolve the singularity problem. However, by incorporating path gain fading, we numerically show that the singularity problem can be eased by linear precoding techniques aided with multiuser selection. 
By extending our network model to include cell-interior users, we determine the capacity regions of the two classes of users for various cooperative strategies. In addition to an outer bound and a baseline scheme, we also consider several locally cooperative transmission approaches. The resulting capacity regions show the tradeoff between the performance improvement and the requirement for BS cooperation, signal processing complexity, and channel state information at the transmitter (CSIT).", "Recently, the remarkable capacity potential of multiple-input multiple-output (MIMO) wireless communication systems was unveiled. The predicted enormous capacity gain of MIMO is nonetheless significantly limited by cochannel interference (CCI) in realistic cellular environments. The previously proposed advanced receiver technique improves the system performance at the cost of increased receiver complexity, and the achieved system capacity is still significantly away from the interference-free capacity upper bound, especially in environments with strong CCI. In this paper, base station cooperative processing is explored to address the CCI mitigation problem in downlink multicell multiuser MIMO networks, and is shown to dramatically increase the capacity with strong CCI. Both information-theoretic dirty paper coding approach and several more practical joint transmission schemes are studied with pooled and practical per-base power constraints, respectively. Besides the CCI mitigation potential, other advantages of cooperative processing including the power gain, channel rank conditioning advantage, and macrodiversity protection are also addressed. The potential of our proposed joint transmission schemes is verified with both heuristic and realistic cellular MIMO settings.", "", "Cooperative transmission by base stations (BSs) can significantly improve the spectral efficiency of multiuser, multi-cell, multiple input multiple output (MIMO) systems. We show that contrary to what is often assumed in the literature, the multiuser interference in such systems is fundamentally asynchronous. Intuitively, perfect timing-advance mechanisms can at best only ensure that the desired signal components -but not also the interference components -are perfectly aligned at their intended mobile stations. We develop an accurate mathematical model for the asynchronicity, and show that it leads to a significant performance degradation of existing designs that ignore the asynchronicity of interference. Using three previously proposed linear preceding design methods for BS cooperation, we develop corresponding algorithms that are better at mitigating the impact of the asynchronicity of the interference. Furthermore, we also address timing-advance inaccuracies (jitter), which are inevitable in a practical system. We show that using jitter-statistics-aware precoders can mitigate the impact of these inaccuracies as well. The insights of this paper are critical for the practical implementation of BS cooperation in multiuser MIMO systems, a topic that is typically oversimplified in the literature.", "It has recently been shown that multi-cell cooperations in cellular networks, enabling distributed antenna systems and joint transmission or joint detection across cell boundaries, can significantly increase capacity, especially that of users at cell borders. Such concepts, typically implicitly assuming unlimited information exchange between base stations, can also be used to increase the network fairness. 
In practical implementations, however, the large amounts of received signals that need to be quantized and transmitted via an additional backhaul between the involved cells to central processing points, will be a non-negligible issue. In this paper, we thus introduce an analytical framework to observe the uplink performance of cellular networks in which joint detection is only applied to a subset of selected users, aiming at achieving best possible capacity and fairness improvements under a strongly constrained backhaul between sites. This reveals a multi-dimensional optimization problem, where we propose a simple, heuristic algorithm that strongly narrows down and serializes the problem while still yielding a significant performance improvement.", "", "", "" ] }
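Several of the coordination schemes above rely on joint zero-forcing across the antennas of the coordinated BTSs. A bare-bones sketch of ZF precoding over the aggregated multi-cell channel is given below (channel pseudo-inverse plus a total-power normalisation); the dimensions, the pooled power constraint, and the flat Rayleigh-fading channel model are simplifying assumptions, and the per-BTS power constraints treated in the abstract would need additional steps.

```python
# Bare-bones zero-forcing precoding over an aggregated multi-cell channel.
# Flat Rayleigh fading, single-antenna users, and a pooled power constraint
# are simplifying assumptions (per-BTS constraints require more work).
import numpy as np

rng = np.random.default_rng(3)
num_users, num_bts_antennas = 4, 8             # K users, total coordinated antennas
P_total = 1.0

H = (rng.standard_normal((num_users, num_bts_antennas))
     + 1j * rng.standard_normal((num_users, num_bts_antennas))) / np.sqrt(2)

W = H.conj().T @ np.linalg.inv(H @ H.conj().T)           # ZF precoder: H @ W = I
W *= np.sqrt(P_total / np.trace(W @ W.conj().T).real)    # pooled power normalisation

effective = H @ W                              # ~ diagonal: inter-user interference nulled
print(np.round(np.abs(effective), 3))
```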