Dataset columns: src (string, 100 to 132k chars), tgt (string, 10 to 710 chars), paper_id (string, 3 to 9 chars), title (string, 9 to 254 chars), discipline (dict).
We present zero-knowledge proofs and arguments for arithmetic circuits over finite prime fields, namely given a circuit, show in zero-knowledge that inputs can be selected leading to a given output. For a field GF(q), where q is an n-bit prime, a circuit of size O(n), and error probability 2^-n, our protocols require communication of O(n^2) bits. This is the same worst-case complexity as the trivial (non zero-knowledge) interactive proof where the prover just reveals the input values. If the circuit involves n multiplications, the best previously known methods would in general require communication of Ω(n^3 log n) bits. Variations of the technique behind these protocols lead to other interesting applications. We first look at the Boolean Circuit Satisfiability problem and give zero-knowledge proofs and arguments for a circuit of size n and error probability 2^-n in which there is an interactive preprocessing phase requiring communication of O(n^2) bits. In this phase, the statement to be proved later need not be known. Later the prover can non-interactively prove any circuit he wants, i.e. by sending only one message, of size O(n) bits. As a second application, we show that Shamir's (Shen's) interactive proof system for the (IP-complete) QBF problem can be transformed to a zero-knowledge proof system with the same asymptotic communication complexity and number of rounds. The security of our protocols can be based on any one-way group homomorphism with a particular set of properties. We give examples of special assumptions sufficient for this, including: the RSA assumption, hardness of discrete log in a prime order group, and polynomial security of Diffie-Hellman encryption. We note that the constants involved in our asymptotic complexities are small enough for our protocols to be practical with realistic choices of parameters.
The methods of Cramer et al. REF lead to arguments with communication complexity linear in the size of the circuit.
14885611
Zero-Knowledge Proofs for Finite Field Arithmetic, or: Can Zero-Knowledge be for Free?
{ "venue": "IN PROC. CRYPTO", "journal": null, "mag_field_of_study": [ "Mathematics", "Computer Science" ] }
In this paper, we study the localization problem in large-scale underwater sensor networks. The adverse aqueous environments, the node mobility, and the large network scale all pose new challenges, and most current localization schemes are not applicable. We propose a hierarchical approach which divides the whole localization process into two sub-processes: anchor node localization and ordinary node localization. Many existing techniques can be used in the former. For the ordinary node localization process, we propose a distributed localization scheme which novelly integrates a 3-dimensional Euclidean distance estimation method with a recursive location estimation method. Simulation results show that our proposed solution can achieve high localization coverage with relatively small localization error and low communication overhead in large-scale 3-dimensional underwater sensor networks.
Additionally, Zhou et al. REF propose a localization scheme that approaches the problem in a range-based hierarchical manner.
2932735
Localization for large-scale underwater sensor networks
{ "venue": null, "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Classical collaborative filtering, and content-based filtering methods try to learn a static recommendation model given training data. These approaches are far from ideal in highly dynamic recommendation domains such as news recommendation and computational advertisement, where the set of items and users is very fluid. In this work, we investigate an adaptive clustering technique for content recommendation based on exploration-exploitation strategies in contextual multi-armed bandit settings. Our algorithm takes into account the collaborative effects that arise due to the interaction of the users with the items, by dynamically grouping users based on the items under consideration and, at the same time, grouping items based on the similarity of the clusterings induced over the users. The resulting algorithm thus takes advantage of preference patterns in the data in a way akin to collaborative filtering methods. We provide an empirical analysis on medium-size real-world datasets, showing scalability and increased prediction performance (as measured by click-through rate) over state-of-the-art methods for clustering bandits. We also provide a regret analysis within a standard linear stochastic noise setting.
Collaborative filtering bandits REF is a similar technique which clusters the users based on context.
1743552
Collaborative Filtering Bandits
{ "venue": "SIGIR '16", "journal": null, "mag_field_of_study": [ "Computer Science", "Mathematics" ] }
Matching the demand for resources ("load") with the supply of resources ("capacity") is a basic problem occurring across many fields of engineering, logistics, and economics, and has been considered extensively both in the Internet and in wireless networks. The ongoing evolution of cellular communication networks into dense, organic, and irregular heterogeneous networks ("HetNets") has elevated load-awareness to a central problem, and introduces many new subtleties. This paper explains how several long-standing assumptions about cellular networks need to be rethought in the context of a load-balanced HetNet: we highlight these as three deeply entrenched myths that we then dispel. We survey and compare the primary technical approaches to HetNet load balancing: (centralized) optimization, game theory, Markov decision processes, and the newly popular cell range expansion (a.k.a. "biasing"), and draw design lessons for OFDMA-based cellular systems. We also identify several open areas for future exploration. Mobile networks are becoming increasingly complicated, with heterogeneity in many different design dimensions. For example, a typical smart phone can connect to the Internet via several different radio technologies, including 3G cellular (e.g. HSPA or EVDO), LTE, and several types of WiFi (e.g. 802.11g, n, or ac), with each of these utilizing several non-overlapping frequency bands. Cellular base stations (BSs) are also becoming increasingly diverse, with traditional macrocells often being shrunk to microcells, and further supplemented with picocells, distributed antennas, and femtocells. This myth is deeply entrenched in the fields of communication and information theory, and indeed, even in the "five bars" display on virtually every mobile phone in existence. It was true conventionally, and still is "instantaneously". For example, the probability of correct detection for a given constellation is monotonically related to the detection-time SINR (i.e. any residual interference not removed by the receiver is treated as noise), as any communication theory text confirms. Outage is also usually thought of in terms of a target SINR, namely the probability of being below it. Further, information theory tells us that achievable data rate follows B log(1 + SNR), or B log(1 + SINR) if the interference is modeled as Gaussian noise, where B is the bandwidth. Thus, increasing the data rate seems to come down to increasing SNR (or SINR) -which yields diminishing returns due to the log -or acquiring more bandwidth. The critical missing piece is the load on the BS, which provides a view of resource allocation over time. Modern wireless systems dynamically allocate resources on the timescale of a millisecond, so even a 100 msec window (about the minimum perceptual time window of a human) provides considerable averaging. In contrast, classical communication and information theory as in the previous paragraph provide only a "snapshot" of rate and reliability. But the user-perceived rate is their instantaneous rate multiplied by the fraction of resources (time/frequency slots) they are allowed to use, which for a typical scheduling regime (e.g. proportional fair or round robin) is about 1/K, where K is the number of other active users on that BS in that band. This is pretty intuitive: everyone has experienced large drops in throughput due to congestion at peak times or in crowded events, irrespective of signal quality, e.g. 
"I have five 1 Henceforth, we shall include WiFi APs as a type of BS: one using unlicensed spectrum and a contention-based 3 bars, why can't I send this text message?!" The technical challenge is that the load K varies both spatially and temporally and is thus impossible to determine a priori for a particular base station. It is often hard even to find a good model for the load K: it is clearly related to coverage area, as larger cells will typically have more active users, but also depends on other factors like the user distribution, traffic models, and other extrinsic factors. A main goal of this paper is to introduce some recent approaches to load-aware cellular network models, along with an appreciation for the limitations of load-blind models. 2 Myth 2: The "Spectrum Crunch" It is a nearly universal article of faith that the amount of electromagnetic spectrum allocated to wireless broadband applications is woefully inadequate. Rather, what we have is an infrastructure shortage, not a spectrum shortage. Nearly everyone agrees that small cells should be added at a rapid pace to ease network congestion, and that this will be the key element to moving towards 1000x. However, the small cells (micro, pico, femto) will be deployed opportunistically, irregularly, and in fixed locations, and have a certain amount of resources they can provide (i.e. spectrum and backhaul). In stark contrast, the devices they serve move around, and sporadically request extensive resources from the network, while at other times are dormant. Thus, the load offered to each base station varies dramatically over time and space. Thus, a small cell network will require much more proactive load balancing in order to make good use of the newly deployed infrastructure. Of course, despite the above myths, many others in both industry and academia have recognized the importance of including load in the analysis of rate. The unifying point is that the modeling and optimization of load should be elevated to have a similar status as the amount of spectrum or the SINR. However, doing so in a technically rigorous manner is not straightforward. Outside of communication systems, load balancing has long been studied as an approach to balance the workload across various servers (in networks) and machines (in manufacturing) in order to optimize quantities like resource utilization, fairness, waiting/processing delays, or throughput. In emerging wireless networks, due to the disparate transmit powers and base station capabilities, even with a fairly uniform user distribution, "natural" user association metrics like SINR or RSSI can lead to a major load imbalance. As an example, the disparity between a max SINR and an optimal (sum log rate wise) association in a three tier HetNet is illustrated in Figure 1 Fundamentally, rate-optimized communication comes down to a large system-level optimization, where decisions like user scheduling and cell association are coupled due to the load and interference in the network. In general, finding the truly optimal user-server association is a combinatorial optimization problem and the complexity grows exponentially with the scale of the network, which is a dead end. We briefly overview a few key technical approaches for load balancing in HetNets. 
Since a general utility maximization of (load-weighted) rate, subject to a resource and/or power constraint, results in a coupled relationship between the users' association and scheduling, this approach is NP-hard and computationally intractable even for modest-sized cellular networks. Dynamic traffic makes the problem even more challenging, leading to a long-standing problem that has been studied extensively in queuing theory, with only marginal progress made, known as the coupled queues problem. One way to make the problem convex is by assuming a fully loaded model (i.e. all BSs always transmitting) and allowing users to associate with multiple BSs, which upper bounds the performance versus a binary association [3]. A basic form is to maximize the utility of load-weighted rate, subject to a resource and/or power constraint, where the binary association indicator is relaxed to a real number between 0 and 1. Following standard optimization tools, namely dual decomposition, a low-complexity distributed algorithm, which converges to a near-optimal solution, can then be developed. As can be observed in Figure 2, there is a large (3.5x) rate gain for "cell-edge" users (bottom 5-10%) and a 2x rate gain for "median" users, compared to a maximum received power based association. Markov Decision Processes (MDPs) provide a framework for studying the sequential optimization of discrete-time stochastic systems in the presence of uncertainty. The objective is to perform actions in the current state to maximize the future expected reward. In the context of HetNets, MDPs have been used to study handoff between different radio access technologies, and provide a possible approach for self-organizing HetNets to combine the benefits of both centralized and distributed design. Game theory, as a discipline, allows analysis of interactive decision-making processes, and provides tractable methods for the investigation of very large decentralized optimization problems. For example, a user-centric approach, without requiring any signaling overhead or coordination among different access networks, is analyzed in [6]. Another example is the study of the dynamics of network selection in [7], where users in different service areas compete for bandwidth from different wireless networks. Although game theory is a useful tool, especially for applications in self-organizing/dynamic networks, the convergence of the resulting algorithms is, in general, not guaranteed. Even if the algorithms converge, they do not necessarily provide an optimal solution, which, along with large overhead, may lead to inefficient utilization. Further, since the main focus of game theory is on strategic decision-making, there is no closed-form expression to characterize the relationship between a performance metric and the network parameters. Thus, although we are not convinced that game theory is the best analysis or design tool for HetNet load balancing, it could provide some insight on how uncoordinated UEs and BSs should associate. Biased received power based user association control is a popular suboptimal technique for proactively offloading users to lower-power base stations and is part of 3GPP standardization efforts [8], [9]. In this technique, users are offloaded to smaller cells using an association bias. Formally, if there are K candidate tiers available for a user to associate with, then the index of the chosen tier is i* = argmax_{i=1,...,K} B_i P_rx,i (1), where B_i is the bias for tier i and P_rx,i is the received power from tier i.
By convention, tier 1 is the macrocell tier and has a bias of 1 (0 dB). For example, a small-cell bias of 10 dB means a UE would associate with the small cell up until its received power was more than 10 dB less than the macrocell's. A natural question concerns the optimality gap between CRE and the more theoretically grounded solutions previously discussed. It is somewhat surprising and reassuring that a simple per-tier biasing nearly achieves the optimal load-aware performance, if the bias values are chosen carefully [3] (see Fig. 2). However, in general, it is difficult to prescribe the optimal biases leveraging optimization techniques. The previous tools and techniques seek to maximize a utility function U for the current network configuration, for which we characterized the gain in average performance as E[max_{ω∈Ω} U(ω)] (2), where Ω is the solution space. However, alternatively assuming an underlying distribution for the network configuration, another problem can be posed instead as max_{ω∈Ω} E[U(ω)] (3), where the optimization is over the averaged utility. The latter formulation falls under the realm of stochastic optimization, i.e. the involved variables are random. The solution to (3) would certainly be suboptimal for (2) - and already we observed the gap between an optimized but static CRE and the globally optimal solution in the last section - but has the advantage of offering much lower complexity and overhead (both computational and messaging) versus re-optimizing the associations for each network realization. Stochastic geometry, as a branch of applied probability, can be used to model base station and user locations in the network by a point process. By using a Poisson point process (PPP) to model user and base station locations, in particular, tractable expressions can be obtained for key metrics like SINR and rate [11], which can then be used for optimization. This approach also has the benefit of giving insights on the impact of key system-level parameters like transmit powers, densities, and bandwidths of different tiers on the design of load balancing algorithms. As an example of the applicability of this framework, cell range expansion has been analyzed using stochastic geometry in [12] by averaging over all the potential network configurations, revealing the effect of important network parameters in a concise form. Modeling base stations as random locations in HetNets makes the precise association region and load distribution intractable. An analytical approximation for the association area was proposed in [12], which was then used for the load distribution (assuming a uniform user distribution), and consequently the rate distribution in terms of the per-tier bias parameters can be found [12], [13]. The derived rate distribution can then be used to find the optimal biases simply by maximizing the biased rate distribution as a function of the bias value. We now explore several design questions that are introduced with load balancing. How much to bias? Can interference management help, how can it be done, and how much is the gain? As small cells will be continually rolled out over time, how (or does) the load balancing change as the small cell density increases? In this section we answer these questions, with the findings summarized in Table 1. Bias Values. There are two major cases to consider for biasing: co-channel deployments (macro
In REF, the authors surveyed different schemes to solve the load-balancing problem, such as centralized optimization, game theory, Markov decision processes, and biasing schemes, and also mentioned some of the open challenges.
8861881
An Overview of Load Balancing in HetNets: Old Myths and Open Problems
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science", "Mathematics" ] }
There has been much interest in the machine learning and algorithmic game theory communities on understanding and using submodular functions. Despite this substantial interest, little is known about their learnability from data. Motivated by applications, such as pricing goods in economics, this paper considers PAC-style learning of submodular functions in a distributional setting. A problem instance consists of a distribution on {0,1}^n and a real-valued function on {0,1}^n that is non-negative, monotone, and submodular. We are given poly(n) samples from this distribution, along with the values of the function at those sample points. The task is to approximate the value of the function to within a multiplicative factor at subsequent sample points drawn from the same distribution, with sufficiently high probability. We develop the first theoretical analysis of this problem, proving a number of important and nearly tight results. For instance, if the underlying distribution is a product distribution then we give a learning algorithm that achieves a constant-factor approximation (under some assumptions). However, for general distributions we provide a surprising Õ(n^{1/3}) lower bound based on a new interesting class of matroids and we also show an O(n^{1/2}) upper bound. Our work combines central issues in optimization (submodular functions and matroids) with central topics in learning (distributional learning and PAC-style analyses) and with central concepts in pseudo-randomness (lossless expander graphs). Our analysis involves a twist on the usual learning theory models and uncovers some interesting structural and extremal properties of submodular functions, which we suspect are likely to be useful in other contexts. In particular, to prove our general lower bound, we use lossless expanders to construct a new family of matroids which can take wildly varying rank values on superpolynomially many sets; no such construction was previously known. This construction shows unexpected extremal properties of submodular functions.
Prior work on learning submodular functions falls into three categories: submodular function regression REF , maximization of submodular discriminant functions, and minimization of submodular discriminant functions.
2064904
Learning submodular functions
{ "venue": "STOC '11", "journal": null, "mag_field_of_study": [ "Mathematics", "Computer Science" ] }
Abstract-Vehicular Ad hoc Networks (VANETs) are one of the most challenging research areas in the field of Mobile Ad Hoc Networks. In this research, we propose a new mechanism for increasing network visibility, by taking the information gained from periodic safety messages (beacons), and inserting it into a 'neighbor' table. The table will be propagated to all neighbors giving a wider vision for each vehicle belonging to the network. It will also decrease the risk of collision at road junctions as each vehicle will have prior knowledge of oncoming vehicles before reaching the junction.
In REF, the authors proposed how to increase the visibility of the network by aggregating and propagating beacon information.
9552265
Increasing Network Visibility Using Coded Repetition Beacon Piggybacking
{ "venue": "World Applied Sciences Journal 13 (1); 100-108, 2011, ISSN 1818-4952, \\c{opyright}IDOSI Publications, 2011", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
A Connected Dominating Set (CDS) working as a virtual backbone is an effective way to decrease the overhead of routing in a wireless sensor network. Furthermore, a k-Connected m-Dominating Set (kmCDS) is necessary for fault tolerance and routing flexibility. Some approximation algorithms have been proposed to construct a kmCDS. However, most of them only consider some special cases where k = 1, 2 or k ≤ m, or are not easy to implement, or have high message complexity. In this paper, we propose a novel distributed algorithm LDA with low message complexity to construct a kmCDS for general k and m whose size is guaranteed to be within a small constant factor of the optimal solution when the maximum node degree is a constant. We also propose one centralized algorithm ICGA with a constant performance ratio to construct a kmCDS. Theoretical analysis as well as simulation results are shown to evaluate the proposed algorithms.
In REF, the authors proposed a centralized algorithm, ICGA, that has a constant performance ratio and can construct a kmCDS for general k and m.
636844
Construction algorithms for k-connected m-dominating sets in wireless sensor networks
{ "venue": "MobiHoc '08", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Abstract. This paper proposes an approach to improve atlas-to-image registration accuracy with large pathologies. Instead of directly registering an atlas to a pathological image, the method learns a mapping from the pathological image to a quasi-normal image, for which more accurate registration is possible. Specifically, the method uses a deep variational convolutional encoder-decoder network to learn the mapping. Furthermore, the method estimates local mapping uncertainty through network inference statistics and uses those estimates to down-weight the image registration similarity measure in areas of high uncertainty. The performance of the method is quantified using synthetic brain tumor images and images from the brain tumor segmentation challenge (BRATS 2015).
REF used a Variational Auto-encoder to learn a mapping from pathological images to quasi-normal (pseudo healthy) images to improve atlas-to-image registration accuracy with large pathologies.
8905349
Registration of pathological images
{ "venue": "Simulation and synthesis in medical imaging : first International Workshop, SASHIMI 2016, held in conjunction with MICCAI 2016, Athens, Greece, October 21, 2016, Proceedings. SASHIMI (Workshop)", "journal": "Simulation and synthesis in medical imaging : first International Workshop, SASHIMI 2016, held in conjunction with MICCAI 2016, Athens, Greece, October 21, 2016, Proceedings. SASHIMI (Workshop)", "mag_field_of_study": [ "Computer Science", "Medicine" ] }
Abstract-In this paper, cognitive routing coupled with spectrum sensing and sharing in a multi-channel multi-hop cognitive radio network (CRN) is investigated. Recognizing the spectrum dynamics in CRN, we propose an opportunistic cognitive routing (OCR) protocol that allows users to exploit the geographic location information and discover the local spectrum access opportunities to improve the transmission performance over each hop. Specifically, based on location information and channel usage statistics, a secondary user (SU) distributedly selects the next hop relay and adapts its transmission to the dynamic spectrum access opportunities in its neighborhood. In addition, we introduce a novel metric, namely, cognitive transport throughput (CTT), to capture the unique properties of CRN and evaluate the potential relay gain of each relay candidate. A heuristic algorithm is proposed to reduce the searching complexity of the optimal selection of channel and relay. Simulation results are given to demonstrate that our proposed OCR well adapts to the spectrum dynamics and outperforms existing routing protocols in CRN. Index Terms-Cognitive radio, multi-hop transmission, opportunistic routing, dynamic spectrum access.
Liu et al. REF designed a routing protocol based on geographical location information and channel availability.
2444885
Spectrum-Aware Opportunistic Routing in Multi-Hop Cognitive Radio Networks
{ "venue": "IEEE Journal on Selected Areas in Communications", "journal": "IEEE Journal on Selected Areas in Communications", "mag_field_of_study": [ "Computer Science" ] }
Commercial cloud offerings, such as Amazon's EC2, let users allocate compute resources on demand, charging based on reserved time intervals. While this gives great flexibility to elastic applications, users lack guidance for choosing between multiple offerings, in order to complete their computations within given budget constraints. In this work, we present BaTS, our budget-constrained scheduler. Using a small task sample, BaTS can estimate costs and makespan for a given bag on different cloud offerings. It provides the user with a choice of options before execution and then schedules the bag according to the user's preferences. BaTS requires no a-priori information about task completion times. We evaluate BaTS by emulating different cloud environments on the DAS-3 multicluster system. Our results show that BaTS correctly estimates budget and makespan for the scenarios investigated; the user-selected schedule is then executed within the given budget limitations.
Dealing with Cloud resources, Oprescu et al. REF design a budget-constrained scheduler for BoTs, estimating the costs and the makespan for various scenarios before executing the user-selected schedule.
1541100
BUDGET ESTIMATION AND CONTROL FOR BAG-OF-TASKS SCHEDULING IN CLOUDS
{ "venue": "Parallel Process. Lett.", "journal": "Parallel Process. Lett.", "mag_field_of_study": [ "Computer Science" ] }
This paper studies a multi-user multiple-input single-output (MISO) downlink system for simultaneous wireless information and power transfer (SWIPT), in which a set of single-antenna mobile stations (MSs) receive information and energy simultaneously via power splitting (PS) from the signal sent by a multi-antenna base station (BS). We aim to minimize the total transmission power at BS by jointly designing transmit beamforming vectors and receive PS ratios for all MSs under their given signal-to-interference-plus-noise ratio (SINR) constraints for information decoding and harvested power constraints for energy harvesting. First, we derive the sufficient and necessary condition for the feasibility of our formulated problem. Next, we solve this non-convex problem by applying the technique of semidefinite relaxation (SDR). We prove that SDR is indeed tight for our problem and thus achieves its global optimum. Finally, we propose two suboptimal solutions of lower complexity than the optimal solution based on the principle of separating the optimization of transmit beamforming and receive PS, where the zero-forcing (ZF) and the SINR-optimal based transmit beamforming schemes are applied, respectively. Simultaneous wireless information and power transfer (SWIPT), broadcast channel, energy harvesting, beamforming, power splitting, semidefinite relaxation.
In REF , a multiuser MISO downlink system for SWIPT is studied.
1767525
Joint Transmit Beamforming and Receive Power Splitting for MISO SWIPT Systems
{ "venue": null, "journal": null, "mag_field_of_study": [ "Mathematics", "Computer Science" ] }
Abstract Micro Aerial Vehicles (MAVs) with perching capabilities can be used to efficiently place sensors in aloft locations. A major challenge for perching is to build a lightweight mechanism that can be easily mounted on a MAV, allowing it to perch (attach and detach on command) on walls of different materials. To date, only very few systems have been proposed that aim at enabling MAVs with perching capabilities. Typically, these solutions either require a delicate dynamic flight maneuver in front of the wall or expose the MAV to very high impact forces when colliding head-first with the wall. In this article, we propose a 4.6g perching mechanism that allows MAVs to perch on walls of natural and man-made materials such as trees and painted concrete facades of buildings. To do this, no control for the MAV is needed other than flying head-first into the wall. The mechanism is designed to translate the impact impulse into a snapping movement that sticks small needles into the surface and uses a small electric motor to detach from the wall and recharge the mechanism for the next perching sequence. Based on this principle, it damps the impact forces that act on the platform to avoid damage to the MAV. We performed 110 sequential perches on a variety of substrates with a success rate of 100%. The main contributions of this article are (i) the evaluation of different designs for perching, (ii) the description and formal modeling of a novel perching mechanism, and (iii) the demonstration and characterization of a functional prototype on a microglider.
One small glider perches on walls with the use of small needles REF .
55910422
A perching mechanism for micro aerial vehicles
{ "venue": null, "journal": "Journal of Micro-Nano Mechatronics", "mag_field_of_study": [ "Materials Science" ] }
Domain adaptation is critical for success in new, unseen environments. Adversarial adaptation models applied in feature spaces discover domain invariant representations, but are difficult to visualize and sometimes fail to capture pixel-level and low-level domain shifts. Recent work has shown that generative adversarial networks combined with cycle-consistency constraints are surprisingly effective at mapping images between domains, even without the use of aligned image pairs. We propose a novel discriminatively-trained Cycle-Consistent Adversarial Domain Adaptation model. CyCADA adapts representations at both the pixel-level and feature-level, enforces cycle-consistency while leveraging a task loss, and does not require aligned pairs. Our model can be applied in a variety of visual recognition and prediction settings. We show new state-of-the-art results across multiple adaptation tasks, including digit classification and semantic segmentation of road scenes demonstrating transfer from synthetic to real world domains.
In particular, REF exploits both image-level and feature-level adaptation in a single-source unsupervised domain adaptation setting.
7646250
CyCADA: Cycle-Consistent Adversarial Domain Adaptation
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science" ] }
In this work, we present a novel local descriptor for video sequences. The proposed descriptor is based on histograms of oriented 3D spatio-temporal gradients. Our contribution is four-fold. (i) To compute 3D gradients for arbitrary scales, we develop a memory-efficient algorithm based on integral videos. (ii) We propose a generic 3D orientation quantization which is based on regular polyhedrons. (iii) We perform an in-depth evaluation of all descriptor parameters and optimize them for action recognition. (iv) We apply our descriptor to various action datasets (KTH, Weizmann, Hollywood) and show that we outperform the state-of-the-art.
Kläser et al. proposed the HOG3D descriptor for video sequences, which uses histograms of oriented 3D spatio-temporal gradients to characterize actions REF.
5607238
A spatio-temporal descriptor based on 3D-gradients
{ "venue": "In BMVC", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Abstract. We study the tradeoffs involved in the energy-efficient localization and tracking of mobile targets by a wireless sensor network. Our work focuses on building a framework for evaluating the fundamental performance of tracking strategies in which only a small portion of the network is activated at any point in time. We first compare naive network operation with random activation and selective activation. In these strategies the gains in energy-savings come at the expense of increased uncertainty in the location of the target, resulting in reduced quality of tracking. We show that selective activation with a good prediction algorithm is a dominating strategy that can yield orders-of-magnitude energy savings with negligible difference in tracking quality. We then consider duty-cycled activation and show that it offers a flexible and dynamic tradeoff between energy expenditure and tracking error when used in conjunction with selective activation.
The authors build a framework to evaluate tracking strategies in an energy-aware context REF.
1827831
Energy-Quality Tradeoffs for Target Tracking in Wireless Sensor Networks
{ "venue": "in International Symposium on Aerospace/Defense sensing Simulation and Controls, Aerosense", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
A novel rough set approach is proposed in this paper to discover classification rules through a process of knowledge induction which selects optimal decision rules with a minimal set of features necessary and sufficient for classification of real-valued data. A rough set knowledge discovery framework is formulated for the analysis of interval-valued information systems converted from real-valued raw decision tables. The optimal feature selection method for information systems with interval-valued features obtains all classification rules hidden in a system through a knowledge induction process. Numerical examples are employed to substantiate the conceptual arguments.
One study discovers classification rules through a knowledge induction process that selects decision rules with a minimal set of features for real-valued data classification REF.
21962046
A rough set approach for the discovery of classification rules in interval-valued information systems
{ "venue": "Int. J. Approx. Reason.", "journal": "Int. J. Approx. Reason.", "mag_field_of_study": [ "Mathematics", "Computer Science" ] }
Writing concurrent programs is a challenge because developers must consider both functional correctness and performance requirements. Numerous program analyses and testing techniques have been proposed to detect functional faults, e.g., caused by incorrect synchronization. However, little work has been done to help developers address performance problems in concurrent programs, e.g., because of inefficient synchronization. This paper presents SyncProf, a concurrency-focused profiling approach that helps in detecting, localizing, and optimizing synchronization bottlenecks. In contrast to traditional profilers, SyncProf repeatedly executes a program with various inputs and summarizes the observed performance behavior. A key novelty is a graph-based representation of relations between critical sections, which is the basis for computing the performance impact of critical sections, for identifying the root cause of a bottleneck, and for suggesting optimization strategies to the developer. We evaluate SyncProf on 19 versions of eight C/C++ projects with both known and previously unknown synchronization bottlenecks. The results show that SyncProf effectively localizes the root causes of these bottlenecks with higher precision than a state of the art lock contention profiler and that it suggests valuable strategies to avoid the bottlenecks. •Software and its engineering → Software notations and tools;
SyncProf REF utilizes Pin to detect, localize, and optimize synchronization bottlenecks.
15969517
SyncProf: detecting, localizing, and optimizing synchronization bottlenecks
{ "venue": "ISSTA 2016", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
A common form of sarcasm on Twitter consists of a positive sentiment contrasted with a negative situation. For example, many sarcastic tweets include a positive sentiment, such as "love" or "enjoy", followed by an expression that describes an undesirable activity or state (e.g., "taking exams" or "being ignored"). We have developed a sarcasm recognizer to identify this type of sarcasm in tweets. We present a novel bootstrapping algorithm that automatically learns lists of positive sentiment phrases and negative situation phrases from sarcastic tweets. We show that identifying contrasting contexts using the phrases learned through bootstrapping yields improved recall for sarcasm recognition.
REF state that sarcasm is a contrast between a positive sentiment and a negative situation.
10168779
Sarcasm as Contrast between a Positive Sentiment and Negative Situation
{ "venue": "EMNLP", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Background: Research into event-based text mining from the biomedical literature has been growing in popularity to facilitate the development of advanced biomedical text mining systems. Such technology permits advanced search, which goes beyond document or sentence-based retrieval. However, existing event-based systems typically ignore additional information within the textual context of events that can determine, amongst other things, whether an event represents a fact, hypothesis, experimental result or analysis of results, whether it describes new or previously reported knowledge, and whether it is speculated or negated. We refer to such contextual information as meta-knowledge. The automatic recognition of such information can permit the training of systems allowing finer-grained searching of events according to the meta-knowledge that is associated with them. Results: Based on a corpus of 1,000 MEDLINE abstracts, fully manually annotated with both events and associated meta-knowledge, we have constructed a machine learning-based system that automatically assigns meta-knowledge information to events. This system has been integrated into EventMine, a state-of-the-art event extraction system, in order to create a more advanced system (EventMine-MK) that not only extracts events from text automatically, but also assigns five different types of meta-knowledge to these events. The meta-knowledge assignment module of EventMine-MK performs with macro-averaged F-scores in the range of 57-87% on the BioNLP'09 Shared Task corpus. EventMine-MK has been evaluated on the BioNLP'09 Shared Task subtask of detecting negated and speculated events. Our results show that EventMine-MK can outperform other state-of-the-art systems that participated in this task. We have constructed the first practical system that extracts both events and associated, detailed meta-knowledge information from biomedical literature. The automatically assigned meta-knowledge information can be used to refine search systems, in order to provide an extra search layer beyond entities and assertions, dealing with phenomena such as rhetorical intent, speculations, contradictions and negations. This finer grained search functionality can assist in several important tasks, e.g., database curation (by locating new experimental knowledge) and pathway enrichment (by providing information for inference). To allow easy integration into text mining systems, EventMine-MK is provided as a UIMA component that can be used in the interoperable text mining infrastructure, U-Compare. Biomedical text mining [1-3] has focussed largely on recognising relevant biomedical entities and binary relations between these entities (e.g., protein-protein interactions [4, 5] , gene-disease associations [6, 7] , etc.). However, the extraction of biomedical events from the literature Full list of author information is available at the end of the article has been a recent focus of research into biomedical natural language processing, since events are crucial for understanding biomedical processes and functions [3] . Events constitute structured representations of biomedical knowledge. They are usually organised around verbs (e.g., activate, inhibit) or nominalised verbs (e.g., expression), which we call trigger expressions. Events have arguments, which contribute towards the description of the event. These arguments, which can either be entities (e.g., p53) or other events, are often assigned semantic roles, which characterise the contribution of the argument to
REF use a machine learning-based approach to assign metaknowledge categories to events.
827074
Extracting semantically enriched events from biomedical literature
{ "venue": "BMC Bioinformatics", "journal": "BMC Bioinformatics", "mag_field_of_study": [ "Medicine", "Computer Science" ] }
Abstract-Electronic Learning has been one of the foremost trends in education so far. Such importance draws the attention to an important shift in the educational paradigm. Due to the complexity of the evolving paradigm, the prospective dynamics of learning require an evolution of knowledge delivery and evaluation. This research work puts forward a futuristic design of an autonomous and intelligent e-Learning system, in which machine learning and user activity analysis play the role of an automatic evaluator of the knowledge level. It is important to assess the knowledge level in order to adapt content presentation and to have a more realistic evaluation of online learners. Several classification algorithms are applied to predict the knowledge level of the learners and the corresponding results are reported. Furthermore, this research proposes a modern design of a dynamic learning environment that goes along with the most recent trends in e-Learning. The experimental results illustrate an overall performance superiority of a support vector machine model in evaluating the knowledge levels; having 98.6% of correctly classified instances with 0.0069 mean absolute error.
Ghatasheh REF introduced a number of enhancements to dynamic e-Learning systems in terms of knowledge transmission and evaluation.
16556027
Knowledge Level Assessment in e-Learning Systems Using Machine Learning and User Activity Analysis
{ "venue": null, "journal": "International Journal of Advanced Computer Science and Applications", "mag_field_of_study": [ "Computer Science" ] }
Limiting the overhead of frequent events on the control plane is essential for realizing a scalable Software-Defined Network. One way of limiting this overhead is to process frequent events in the data plane. This requires modifying switches and comes at the cost of visibility in the control plane. Taking an alternative route, we propose Kandoo, a framework for preserving scalability without changing switches. Kandoo has two layers of controllers: (i) the bottom layer is a group of controllers with no interconnection, and no knowledge of the network-wide state, and (ii) the top layer is a logically centralized controller that maintains the network-wide state. Controllers at the bottom layer run only local control applications (i.e., applications that can function using the state of a single switch) near datapaths. These controllers handle most of the frequent events and effectively shield the top layer. Kandoo's design enables network operators to replicate local controllers on demand and relieve the load on the top layer, which is the only potential bottleneck in terms of scalability. Our evaluations show that a network controlled by Kandoo has an order of magnitude lower control channel consumption compared to normal OpenFlow networks.
On the contrary, Kandoo REF proposes a hierarchical distribution of the controllers based on two layers of controllers: (i) the bottom layer, a group of controllers with no interconnection, and no knowledge of the network-wide state, and (ii) the top layer, a logically centralized controller that maintains the network-wide state.
193153
Kandoo: a framework for efficient and scalable offloading of control applications
{ "venue": "HotSDN '12", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Abstract-Migrating computational intensive tasks from mobile devices to more resourceful cloud servers is a promising technique to increase the computational capacity of mobile devices while saving their battery energy. In this paper, we consider a MIMO multicell system where multiple mobile users (MUs) ask for computation offloading to a common cloud server. We formulate the offloading problem as the joint optimization of the radio resources−the transmit precoding matrices of the MUs−and the computational resources−the CPU cycles/second assigned by the cloud to each MU−in order to minimize the overall users' energy consumption, while meeting latency constraints. The resulting optimization problem is nonconvex (in the objective function and constraints). Nevertheless, in the single-user case, we are able to express the global optimal solution in closed form. In the more challenging multiuser scenario, we propose an iterative algorithm, based on a novel successive convex approximation technique, converging to a local optimal solution of the original nonconvex problem. Then, we reformulate the algorithm in a distributed and parallel implementation across the radio access points, requiring only a limited coordination/signaling with the cloud. Numerical results show that the proposed schemes outperform disjoint optimization algorithms.
Regarding decentralized approaches, Sardellitti et al. formulated the offloading problem as a joint optimization of the radio resources and the computational resources, and provided a distributed resource-scheduling algorithm based on a successive convex approximation technique to minimize the overall users' energy consumption while meeting latency constraints REF.
13245153
Joint Optimization of Radio and Computational Resources for Multicell Mobile-Edge Computing
{ "venue": "2014 IEEE 15th International Workshop on Signal Processing Advances in Wireless Communications (SPAWC)", "journal": "2014 IEEE 15th International Workshop on Signal Processing Advances in Wireless Communications (SPAWC)", "mag_field_of_study": [ "Computer Science", "Mathematics" ] }
Abstract. The paper presents the proposed Security Architecture for Open Collaborative Environment (OCE) being developed in the framework of the Collaboratory.nl (CNL) project, with the intent to build a flexible, customer-driven security infrastructure for open collaborative applications. The architecture is based on extended use of emerging Web Services and Grid security technologies combined with concepts from the generic Authentication, Authorization and Accounting (AAA) and Role-Based Access Control (RBAC) frameworks. The paper also describes another proposed solution, the Job-centric security model, which uses a Job description as a semantic document created on the basis of the signed order (or business agreement) to provide a job-specific context for invocation of the basic OCE security services. A typical OCE use case of policy-based access control is discussed in detail.
REF propose a security architecture for an Open Collaborative Environment (OCE), with the intent to build a flexible, customer-driven security infrastructure for open collaborative applications.
15654879
Security Architecture for Open Collaborative Environment
{ "venue": "EGC", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Existing graph-based ranking methods for keyphrase extraction compute a single importance score for each word via a single random walk. Motivated by the fact that both documents and words can be represented by a mixture of semantic topics, we propose to decompose traditional random walk into multiple random walks specific to various topics. We thus build a Topical PageRank (TPR) on word graph to measure word importance with respect to different topics. After that, given the topic distribution of the document, we further calculate the ranking scores of words and extract the top ranked ones as keyphrases. Experimental results show that TPR outperforms state-of-the-art keyphrase extraction methods on two datasets under various evaluation metrics.
In REF , Liu et al. decomposed the traditional PageRank into multiple random walks specific to various topics for keyphrase extraction.
9506420
Automatic Keyphrase Extraction via Topic Decomposition
{ "venue": "EMNLP", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
A critical aspect of applications with wireless sensor networks is network lifetime. Battery-powered sensors are usable as long as they can communicate captured data to a processing node. Sensing and communications consume energy, therefore judicious power management and scheduling can effectively extend operational time. To monitor a set of targets with known locations when ground access in the monitored area is prohibited, one solution is to deploy the sensors remotely, from an aircraft. The loss of precise sensor placement would then be compensated by a large sensor population density in the drop zone, that would improve the probability of target coverage. The data collected from the sensors is sent to a central node for processing. In this paper we propose an efficient method to extend the sensor network operational time by organizing the sensors into a maximal number of disjoint set covers that are activated successively. Only the sensors from the current active set are responsible for monitoring all targets and for transmitting the collected data, while nodes from all other sets are in a low-energy sleep mode. In this paper we address the maximum disjoint set covers problem and we design a heuristic that computes the sets. Theoretical analysis and performance evaluation results are presented to verify our approach.
The general target coverage problem is introduced in REF, where it is modelled as finding the maximal number of disjoint set covers, such that every cover completely monitors all targets.
8022422
Improving wireless sensor network lifetime through power aware organization
{ "venue": "ACM Wireless Networks", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
This document describes version 0.4.0 of librosa: a Python package for audio and music signal processing. At a high level, librosa provides implementations of a variety of common functions used throughout the field of music information retrieval. In this document, a brief overview of the library's functionality is provided, along with explanations of the design goals, software development practices, and notational conventions.
For example, librosa REF is a Python package often used for audio and signal processing.
33504
librosa: Audio and Music Signal Analysis in Python
{ "venue": "Proceedings of the 14th Python in Science Conference", "journal": "Proceedings of the 14th Python in Science Conference", "mag_field_of_study": [ "Computer Science" ] }
In this work, we introduce a convolutional neural network model, ConvE, for the task of link prediction. ConvE applies 2D convolution directly on embeddings, thus inducing spatial structure in embedding space. To scale to large knowledge graphs and prevent overfitting due to over-parametrization, previous work seeks to reduce parameters by performing simple transformations in embedding space. We take inspiration from computer vision, where convolution is able to learn multiple layers of non-linear features while reducing the number of parameters through weight sharing. Applied naively, convolutional models for link prediction are computationally costly. However, by predicting all links simultaneously we improve test time performance by more than 300x on FB15k. We report stateof-the-art results for numerous previously introduced link prediction benchmarks, including the well-established FB15k and WN18 datasets. Previous work noted that these two datasets contain many reversible triples, but the severity of this issue was not quantified. To investigate this, we design a simple model that uses a single rule which reverses relations and achieves state-of-the-art results. We introduce WN18RR, a subset of WN18 which was constructed the same way as the previously proposed FB15k-237, to alleviate this problem and report results for our own and previously proposed models for all datasets. Analysis of our convolutional model suggests that it is particularly good at modelling nodes with high indegree and nodes with high PageRank and that 2D convolution applied on embeddings seems to induce contrasting pixel-level structures.
REF introduced ConvE that uses 2D convolution over embeddings and multiple layers of nonlinear features to model knowledge graphs.
4328400
Convolutional 2D Knowledge Graph Embeddings
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science", "Mathematics" ] }
Abstract In this paper we present an adaptive video transmission framework that integrates rate allocation and buffer control at the source with the playback adjustment mechanism at the receiver. A transmission rate is determined by a rate allocation algorithm which uses the program clock reference (PCR) embedded in the video streams to regulate the transmission rate in a refined way. The server side also maintains multiple buffers for packets of different importance levels to trade off random loss for controlled loss according to the source buffer size, the visual impact, and the playback deadline. An over-boundary playback adjustment mechanism based on a proportional-integral (PI) controller is adopted at the receiver to maximize the visual quality of the displayed video according to the overall loss and the receiver buffer occupancy. The performance of our proposed framework is evaluated in terms of peak signal-to-noise ratio (PSNR) in the simulations, and the simulation results demonstrate the improvement of the average PSNR values as well as the better quality of the decoded frames. Key words: Variable-bit-rate (VBR), video streaming, program clock reference (PCR), buffer management, proportional-integral controller. In order to obtain better visual quality, videos are required to use variable-bit-rate (VBR) encoding. However, it is more difficult to manage the VBR video traffic because of its significant bit-rate burstiness [1]. Normally, transmission of video requires high bandwidth and low delay. Much research has been done on VBR compressed video transmission [2-9]. In [2], the problem of streaming packetized media in a rate-distortion optimized way was addressed. An iterative descent algorithm was used to minimize the average end-to-end distortion. However, the high computational complexity of this approach made it less appealing during real-time streaming, where the server must adapt to bandwidth variations very quickly. Adaptive media playout (AMP) was proposed from the receiver point of view in [3] to vary the playout rate of media frames according to the buffer occupancy as soon as the target buffer level is reached, which may cause jitter at the critical point of two adjacent buffer levels. A multi-buffer scheduling scheme was proposed in [4] to schedule the transmission based on the source buffer priority. A proportional-integral-derivative (PID) controller was adopted in [5] to achieve a better tradeoff between spatial and temporal qualities. The above two schemes belong to server-side technologies, which only consider the sender buffer state without taking into account the end-to-end delay constraint of multimedia applications. [6] addressed the problem of optimizing the playback delay experienced by a population of heterogeneous clients and proposed a server-based scheduling strategy that targets a fair distribution of the playback delays. [7] modeled the streaming system as a queuing system. An optimal substream was selected based on the decoding failure probability of the frame and the effective network bandwidth. [8] proposed a reverse frame selection (RFS) scheme based on dynamic programming to solve the problem of video streaming over VBR channels. [9] presented a streaming framework centered around the concept of priority drop. It combined scalable compression and adaptive streaming to provide a graceful degradation of the quality. Most of the previous approaches focused on regulating transmission rate through the observation of network sta-
In REF, a rate allocation algorithm operating at the Group of Pictures (GoP) level is performed at the sender to maximize the visual quality according to the overall loss and the receiver buffer occupancy.
14290765
Joint Rate Allocation and Buffer Management for Robust Transmission of VBR Video
{ "venue": "ICMCS", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Abstract: Wireless Sensor Networks (WSNs), in recent times, have become one of the most promising network solutions with a wide variety of applications in the areas of agriculture, environment, healthcare and the military. Notwithstanding these promising applications, sensor nodes in WSNs are vulnerable to different security attacks due to their deployment in hostile and unattended areas and their resource constraints. One such attack is the DoS jamming attack, which interferes with and disrupts the normal functions of sensor nodes in a WSN by emitting radio frequency signals to jam legitimate signals, causing a denial of service. In this work we propose a step-wise approach using a statistical process control technique to detect these attacks. We deploy an exponentially weighted moving average (EWMA) to detect anomalous changes in the intensity of a jamming attack event by using the packet inter-arrival feature of the received packets from the sensor nodes. Results obtained from a trace-driven simulation show that the proposed solution can efficiently and accurately detect jamming attacks in WSNs with little or no overhead.
Results obtained from a trace-driven simulation show that the proposed solution can efficiently and accurately detect jamming attacks in WSNs with little or no overhead REF .
44172531
A Statistical Approach to Detect Jamming Attacks in Wireless Sensor Networks
{ "venue": "Sensors (Basel, Switzerland)", "journal": "Sensors (Basel, Switzerland)", "mag_field_of_study": [ "Engineering", "Computer Science", "Medicine" ] }
In this paper, the design, implementation and testing of a digital microphone array is presented. The array uses digital MEMS microphones which integrate the microphone, amplifier and analogue to digital converter on a single chip in place of the analogue microphones and external audio interfaces currently used. The device has the potential to be smaller, cheaper and more flexible than typical analogue arrays, however the effect on speech recognition performance of using digital microphones is as yet unknown. In order to evaluate the effect, an analogue array and the new digital array are used to simultaneously record test data for a speech recognition experiment. Initial results employing no adaptation show that performance using the digital array is significantly worse (14% absolute WER) than the analogue device. Subsequent experiments using MLLR and CMLLR channel adaptation reduce this gap, and employing MLLR for both channel and speaker adaptation reduces the difference between the arrays to 4.5% absolute WER.
The authors in REF describe the design on an FPGA of an eight-element digital MEMS microphone array for distant speech recognition.
9663870
A digital microphone array for distant speech recognition
{ "venue": "2010 IEEE International Conference on Acoustics, Speech and Signal Processing", "journal": "2010 IEEE International Conference on Acoustics, Speech and Signal Processing", "mag_field_of_study": [ "Computer Science" ] }
Abstract-This paper presents an FPGA emulation-based fast Network on Chip (NoC) prototyping framework, called the Dynamic Reconfigurable NoC (DRNoC) Emulation Platform. The main distinguishing characteristic of this approach is that design exploration does not require re-synthesis, accelerating the process. To this end, partial reconfiguration capabilities of some state-of-the-art FPGAs have been developed and applied. The paper describes all the building elements of the proposed solution: the partial reconfiguration approach used, the design space exploration framework itself, and the data measuring system. Results and a use case are shown.
An FPGA emulation-based NoC prototyping framework is presented, where the main goal is to speed up the synthesis process by partial reconfiguration of hard cores REF .
18253432
A Fast Emulation-Based NoC Prototyping Framework
{ "venue": "2008 International Conference on Reconfigurable Computing and FPGAs", "journal": "2008 International Conference on Reconfigurable Computing and FPGAs", "mag_field_of_study": [ "Computer Science" ] }
We examine the performance profile of Convolutional Neural Network (CNN) training on the current generation of NVIDIA Graphics Processing Units (GPUs). We introduce two new Fast Fourier Transform convolution implementations: one based on NVIDIA's cuFFT library, and another based on a Facebook authored FFT implementation, fbfft, that provides significant speedups over cuFFT (over 1.5×) for whole CNNs. Both of these convolution implementations are available in open source, and are faster than NVIDIA's cuDNN implementation for many common convolutional layers (up to 23.5× for a synthetic kernel configuration). We discuss different performance regimes of convolutions, comparing areas where straightforward time domain convolutions outperform Fourier frequency domain convolutions. Details on algorithmic applications of NVIDIA GPU hardware specifics in the implementation of fbfft are also provided.
Additionally, Vasilache et al. REF introduce two new FFT-based implementations for more significant speedups.
15193948
Fast Convolutional Nets With fbfft: A GPU Performance Evaluation
{ "venue": "ICLR 2015", "journal": "arXiv: Learning", "mag_field_of_study": [ "Computer Science", "Mathematics" ] }
In the reordering buffer management problem (RBM) a sequence of n colored items enters a buffer with limited capacity k. When the buffer is full, one item is removed to the output sequence, making room for the next input item. This step is repeated until the input sequence is exhausted and the buffer is empty. The objective is to find a sequence of removals that minimizes the total number of color changes in the output sequence. The problem formalizes numerous applications in computer and production systems, and is known to be NP-hard. We give the first constant factor approximation guarantee for RBM. Our algorithm is based on an intricate "rounding" of the solution to an LP relaxation for RBM, so it also establishes a constant upper bound on the integrality gap of this relaxation. Our results improve upon the best previous bound of O( √ log k) of Adamaszek et al. (STOC 2011) that used different methods and gave an online algorithm. Our constant factor approximation beats the super-constant lower bounds on the competitive ratio given by Adamaszek et al. This is the first demonstration of an offline algorithm for RBM that is provably better than any online algorithm.
The offline algorithm by Avigdor-Elgrabli and Rabani REF is based on an intricate rounding of a solution of an LP relaxation of the problem.
5855942
A Constant Factor Approximation Algorithm for Reordering Buffer Management
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Mathematics", "Computer Science" ] }
In real-world applications of visual recognition, many factors-such as pose, illumination, or image quality-can cause a significant mismatch between the source domain on which classifiers are trained and the target domain to which those classifiers are applied. As such, the classifiers often perform poorly on the target domain. Domain adaptation techniques aim to correct the mismatch. Existing approaches have concentrated on learning feature representations that are invariant across domains, and they often do not directly exploit low-dimensional structures that are intrinsic to many vision datasets. In this paper, we propose a new kernel-based method that takes advantage of such structures. Our geodesic flow kernel models domain shift by integrating an infinite number of subspaces that characterize changes in geometric and statistical properties from the source to the target domain. Our approach is computationally advantageous, automatically inferring important algorithmic parameters without requiring extensive crossvalidation or labeled data from either domain. We also introduce a metric that reliably measures the adaptability between a pair of source and target domains. For a given target domain and several source domains, the metric can be used to automatically select the optimal source domain to adapt and avoid less desirable ones. Empirical studies on standard datasets demonstrate the advantages of our approach over competing methods.
GFK (Geodesic Flow Kernel), proposed by Gong et al. REF , is a kernel-based method that considers an infinite number of subspaces and models marginal and distributional shifts between domains.
6742009
Geodesic flow kernel for unsupervised domain adaptation
{ "venue": "2012 IEEE Conference on Computer Vision and Pattern Recognition", "journal": "2012 IEEE Conference on Computer Vision and Pattern Recognition", "mag_field_of_study": [ "Computer Science" ] }
Abstract Random forests is currently one of the most used machine learning algorithms in the non-streaming (batch) setting. This preference is attributable to its high learning performance and low demands with respect to input preparation and hyper-parameter tuning. However, in the challenging context of evolving data streams, there is no random forests algorithm that can be considered state-of-the-art in comparison to bagging and boosting based algorithms. In this work, we present the adaptive random forest (ARF) algorithm for classification of evolving data streams. In contrast to previous attempts of replicating random forests for data stream learning, ARF includes an effective resampling method and adaptive operators that can cope with different types of concept drifts without complex optimizations for different data sets. We present experiments with a parallel implementation of ARF which has no degradation in terms of classification performance in comparison to a serial implementation, since trees and adaptive operators are independent from one another. Finally, we compare ARF with state-of-the-art algorithms in a traditional test-then-train evaluation and a novel delayed labelling evaluation, and show that ARF is accurate and uses a feasible amount of resources.
The Adaptive Random Forest (ARF) algorithm proposes better resampling methods for updating classifiers over drifting data streams REF .
21671230
Adaptive random forests for evolving data stream classification
{ "venue": "Machine Learning", "journal": "Machine Learning", "mag_field_of_study": [ "Computer Science" ] }
In this paper we introduce a large-scale hand pose dataset, collected using a novel capture method. Existing datasets are either generated synthetically or captured using depth sensors: synthetic datasets exhibit a certain level of appearance difference from real depth images, and real datasets are limited in quantity and coverage, mainly due to the difficulty to annotate them. We propose a tracking system with six 6D magnetic sensors and inverse kinematics to automatically obtain 21-joints hand pose annotations of depth maps captured with minimal restriction on the range of motion. The capture protocol aims to fully cover the natural hand pose space. As shown in embedding plots, the new dataset exhibits a significantly wider and denser range of hand poses compared to existing benchmarks. Current state-of-the-art methods are evaluated on the dataset, and we demonstrate significant improvements in cross-benchmark performance. We also show significant improvements in egocentric hand pose estimation with a CNN trained on the new dataset.
The BigHand2.2M benchmark dataset REF is a large dataset which uses 6D magnetic sensors and inverse kinematics to automatically obtain 21 joints hand pose annotations on depth maps.
4662035
BigHand2.2M Benchmark: Hand Pose Dataset and State of the Art Analysis
{ "venue": "2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)", "journal": "2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)", "mag_field_of_study": [ "Computer Science" ] }
In order to keep up with the demand of curating the deluge of crowd-sourced content, social media platforms leverage user interaction feedback to make decisions about which content to display, highlight, and hide. User interactions such as likes, votes, clicks, and views are assumed to be a proxy of a content's quality, popularity, or news-worthiness. In this paper we ask: how predictable are the interactions of a user on social media? To answer this question we recorded the clicking, browsing, and voting behavior of 186 Reddit users over a year. We present interesting descriptive statistics about their combined 339,270 interactions, and we find that relatively simple models are able to predict users' individual browse-or vote-interactions with reasonable accuracy.
While user interactions (likes, votes, clicks, and views) serve as a proxy for the content's quality, popularity, or news-worthiness, predicting user behavior was found to be quite easy REF .
649910
Predicting User-Interactions on Reddit
{ "venue": "ASONAM '17", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
In the atomic snapshot system model, the processes of an asynchronous distributed system communicate by atomic write and atomic snapshot read operations on a shared memory consisting of single-writer, multiple-reader registers. The processes may fail by crashing. It is shown that in this model, a wait-free, full-information protocol complex is homotopy equivalent to the underlying input complex. A span in the sense of Herlihy and Shavit provides the homotopy equivalence. It follows that the protocol complex and the input complex are indistinguishable by ordinary homology or homotopy groups.
An AS protocol complex is not generally homeomorphic to the underlying input complex, but it is homotopy equivalent to it REF .
17249059
A Note on the Homotopy Type of Wait-free Atomic Snapshot Protocol Complexes
{ "venue": null, "journal": null, "mag_field_of_study": [ "Mathematics", "Computer Science" ] }
Abstract: This paper discusses a buying and selling timing prediction system for stocks on the Tokyo Stock Exchange and analysis of internal representation. It is based on modular neural networks [1][2]. We developed a number of learning algorithms and prediction methods for the TOPIX (Tokyo Stock Exchange Prices Indexes) prediction system. The prediction system achieved accurate predictions and the simulation on stocks trading showed an excellent profit. The prediction system was developed by Fujitsu and Nikko Securities.
In one of the earliest studies, REF used several learning algorithms and prediction methods for the Tokyo stock exchange prices index (TOPIX) prediction system.
7078470
Stock market prediction system with modular neural networks
{ "venue": "1990 IJCNN International Joint Conference on Neural Networks", "journal": "1990 IJCNN International Joint Conference on Neural Networks", "mag_field_of_study": [ "Computer Science" ] }
Social networking websites allow users to create and share content. Big information cascades of post resharing can form as users of these sites reshare others' posts with their friends and followers. One of the central challenges in understanding such cascading behaviors is in forecasting information outbreaks, where a single post becomes widely popular by being reshared by many users. In this paper, we focus on predicting the final number of reshares of a given post. We build on the theory of self-exciting point processes to develop a statistical model that allows us to make accurate predictions. Our model requires no training or expensive feature engineering. It results in a simple and efficiently computable formula that allows us to answer questions, in real-time, such as: Given a post's resharing history so far, what is our current estimate of its final number of reshares? Is the post resharing cascade past the initial stage of explosive growth? And, which posts will be the most reshared in the future? We validate our model using one month of complete Twitter data and demonstrate a strong improvement in predictive accuracy over existing approaches. Our model gives only 15% relative error in predicting final size of an average information cascade after observing it for just one hour.
Our approach to identify the period of maximum growth and start of the inhibition region in a cascade life based on Hawkes process is performed along the line of work introduced in REF where the authors use Hawkes point process model to predict the final number of reshares of a post.
6181286
SEISMIC: A Self-Exciting Point Process Model for Predicting Tweet Popularity
{ "venue": "KDD '15", "journal": null, "mag_field_of_study": [ "Computer Science", "Physics", "Mathematics" ] }
Abstract-State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [1] and Fast R-CNN [2] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features-using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model [3], our detection system has a frame rate of 5 fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available. Region proposal methods typically rely on inexpensive features and economical inference schemes. Selective Search [4] , one of the most popular methods, greedily merges superpixels based on engineered low-level features. Yet when compared to efficient detection networks [2], Selective Search is an order of magnitude slower, at 2 seconds per image in a CPU implementation. EdgeBoxes [6] currently provides the best tradeoff between proposal quality and speed, at 0.2 seconds per image. Nevertheless, the region proposal step still consumes as much running time as the detection network. One may note that fast region-based CNNs take advantage of GPUs, while the region proposal methods used in research are implemented on the CPU, making such runtime comparisons inequitable. An obvious way to accelerate proposal computation is to re-implement it for the GPU. This may be an effective engineering solution, but re-implementation ignores the down-stream detection network and therefore misses important opportunities for sharing computation. In this paper, we show that an algorithmic change-computing proposals with a deep convolutional neural network-leads to an elegant and effective solution where proposal computation is nearly cost-free given the detection network's computation. To this end, we introduce novel Region Proposal Networks (RPNs) that share convolutional layers with state-of-the-art object detection networks [1], [2] . By sharing convolutions at test-time, the marginal cost for computing proposals is small (e.g., 10 ms per image). Our observation is that the convolutional feature maps used by region-based detectors, like Fast R-CNN, can also be used for generating region proposals. On top of these convolutional features, we construct an RPN by adding a few additional convolutional layers that simultaneously regress region bounds and objectness scores at each location on a regular grid. The RPN is thus a kind of fully convolutional network (FCN) [7] and can be trained end-to-end specifically for the task for generating detection proposals. RPNs are designed to efficiently predict region proposals with a wide range of scales and aspect ratios. 
In contrast to prevalent methods [1], [2], [8], [9] that use pyramids of images (Fig. 1a) or pyramids of filters (Fig. 1b), we introduce novel "anchor" boxes that serve as references at multiple scales and aspect ratios. Our scheme can be thought of as a pyramid of regression references (Fig. 1c), which avoids enumerating images or filters of multiple scales or aspect ratios. This model performs well when trained and tested using single-scale images and thus benefits running speed. To unify RPNs with Fast R-CNN [2] object detection networks, we propose a training scheme that alternates between fine-tuning for the region proposal task and then fine-tuning for object detection, while keeping the proposals fixed.
In the Faster R-CNN pipeline REF , the bounding box proposals were generated by a Region Proposal Network (RPN), and the overall framework can thus be trained in an end-to-end manner.
10328909
Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks
{ "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "mag_field_of_study": [ "Computer Science", "Medicine" ] }
Abstract-We present a class of massively parallel processor architectures called invasive tightly coupled processor arrays (TCPAs). The presented processor class is a highly parameterizable template, which can be tailored before runtime to fulfill customers' requirements such as performance, area cost, and energy efficiency. These programmable accelerators are well suited for domain-specific computing from the areas of signal, image, and video processing as well as other streaming processing applications. To overcome future scaling issues (e.g., power consumption, reliability, resource management, as well as application parallelization and mapping), TCPAs are inherently designed in a way to support self-adaptivity and resource awareness at the hardware level. Here, we follow a recently introduced resource-aware parallel computing paradigm called invasive computing, where an application can dynamically claim, execute, and release resources. Furthermore, we show how invasive computing can be used as an enabler for power management. Finally, we will introduce ideas on how to realize fault-tolerant loop execution on such massively parallel architectures through employing on-demand spatial redundancies at the processor array level.
In REF , the authors present massively parallel programmable accelerators.
10590448
Massively Parallel Processor Architectures for Resource-aware Computing
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science" ] }
We study the problem of semi-supervised question answering--utilizing unlabeled text to boost the performance of question answering models. We propose a novel training framework, the Generative Domain-Adaptive Nets. In this framework, we train a generative model to generate questions based on the unlabeled text, and combine model-generated questions with human-generated questions for training question answering models. We develop novel domain adaptation algorithms, based on reinforcement learning, to alleviate the discrepancy between the modelgenerated data distribution and the humangenerated data distribution. Experiments show that our proposed framework obtains substantial improvement from unlabeled text.
REF ) adopt a seq2seq model to generate questions based on paragraphs and answers into their generative adversarial framework.
15164488
Semi-Supervised QA with Generative Domain-Adaptive Nets
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science" ] }
Abstract. This paper presents a lightweight method for unsupervised extraction of paraphrases from arbitrary textual Web documents. The method differs from previous approaches to paraphrase acquisition in that 1) it removes the assumptions on the quality of the input data, by using inherently noisy, unreliable Web documents rather than clean, trustworthy, properly formatted documents; and 2) it does not require any explicit clue indicating which documents are likely to encode parallel paraphrases, as they report on the same events or describe the same stories. Large sets of paraphrases are collected through exhaustive pairwise alignment of small needles, i.e., sentence fragments, across a haystack of Web document sentences. The paper describes experiments on a set of about one billion Web documents, and evaluates the extracted paraphrases in a natural-language Web search application.
REF extracted sentence fragments occurring in identical contexts as paraphrases from one billion web documents.
8620961
Aligning Needles in a Haystack: Paraphrase Acquisition Across the Web
{ "venue": "Second International Joint Conference on Natural Language Processing: Full Papers", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Abstract This paper introduces cost curves, a graphical technique for visualizing the performance (error rate or expected cost) of 2-class classifiers over the full range of possible class distributions and misclassification costs. Cost curves are shown to be superior to ROC curves for visualizing classifier performance for most purposes. This is because they visually support several crucial types of performance assessment that cannot be done easily with ROC curves, such as showing confidence intervals on a classifier's performance, and visualizing the statistical significance of the difference in performance of two classifiers. A software tool supporting all the cost curve analysis described in this paper is available from the authors.
Drummond and Holte REF proposed a visualization technique called the "Cost Curve", which is able to take cost terms into account when showing confidence intervals on a classifier's performance.
9919123
Cost curves: An improved method for visualizing classifier performance
{ "venue": "Machine Learning", "journal": "Machine Learning", "mag_field_of_study": [ "Computer Science" ] }
The evolving technology of computer auto-fabrication ("3-D printing") now makes it possible to produce physical models for complex biological molecules and assemblies. We report on an application that demonstrates the use of auto-fabricated tangible models and augmented reality for research and education in molecular biology, and for enhancing the scientific environment for collaboration and exploration. We have adapted an augmented reality system to allows virtual 3-D representations (generated by the Python Molecular Viewer) to be overlaid onto a tangible molecular model. Users can easily change the overlaid information, switching between different representations of the molecule, displays of molecular properties such as electrostatics, or dynamic information. The physical model provides a powerful, intuitive interface for manipulating the computer models, streamlining the interface between human intent, the physical model, and the computational activity. With the prevalence of structural and genomic data, molecular biology has become a human-guided, computer-assisted endeavor. The computer assists the essential human function in two ways: in exploration of scientific data, searching for and testing scientific hypotheses; and in collaboration between two or more scientists, to share knowledge and expertise. As databases grow, as our structure and process models become more complex, and as software methods become more diverse, access and manipulation of digital information is increasingly a critical issue for research in molecular biology. Currently, exploratory research in structural molecular biology is dominated by 3-D representations via computer graphics. Collaboration, both remote and local, is aided by shared viewing of these interactive visual representations of molecular data. Yet, recent advances in the field of human-computer interfaces have not been applied to the technology used by molecular biologists --most work in biomolecular structure and genomics is performed in front of a workstation using a mouse and keyboard as input devices. The tactile and kinesthetic senses provide key perceptual cues to our ability to understand 3-D form and to perform physical manipulations, but are currently under-utilized in molecular biology. Early structure research relied heavily on physical models: Pauling used his newly-invented spacefilling models to predict the basic folding units of protein structures [1] and Watson and Crick used brass-wire molecular models to help them determine the structure of DNA [2], which reconciled decades of genetic data. These researchers "thought with their hands" to produce important scientific results. Current research in molecular biology now focuses on larger assemblies and more complex interactions, for which traditional atomic models are inadequate. Merging physical and virtual objects into an "augmented reality" (AR) environment [3] enables new modes of interaction through the manipulation of tangible models and the complex information they represent [4] . The evolving technology of computer auto-fabrication ("3D printing") now makes it possible to produce physical models for complex molecular assemblies. In this paper we report on an application that demonstrates the use of auto-fabricated tangible models and AR for research in molecular biology to enhance the scientific environment for collaboration and exploration. 
The physical models are integrated into an augmented reality environment to streamline the interface between human intent, the physical model, and the computational activity. We have developed an AR system that allows virtual 3-D representations generated by our Python Molecular Viewer (PMV) [5] to be overlaid on an auto-fabricated model of the molecule. The precise registration of the virtual objects with the real world is done using the ARToolKit library developed at the University of Washington [6]. While using the system, users can easily change the representation shown, and, for example, access information about molecular properties of the molecules. We will first describe how we create 3D tangible models of a molecular structure from a known atomic structure, then explain the integration of ARToolKit in our Python framework, and finally present some examples. We use PMV [5] both to create our virtual objects and to design our tangible models, simplifying the integration of the models with the virtual environment. PMV is a modular software framework for designing and specifying a wide range of molecular models, including molecular surfaces, extruded volumes, backbone ribbons, and atomic ball-and-stick representations. It allows the design of models at different levels of abstraction for different needs: using representations that focus on molecular shape when large systems and interactions are presented, and incorporating atomic details when needed to look at function at the atomic level.
In the field of molecular biology, Gillet et al. REF augmented physical molecular models with a virtual overlay: The user could change the representation of the digital layer or create new combinations of molecules.
3361291
Augmented Reality with Tangible Auto-Fabricated Models for Molecular Biology Applications
{ "venue": "IEEE Visualization 2004", "journal": "IEEE Visualization 2004", "mag_field_of_study": [ "Computer Science" ] }
Many important problems can be modeled as a system of interconnected entities, where each entity is recording time-dependent observations or measurements. In order to spot trends, detect anomalies, and interpret the temporal dynamics of such data, it is essential to understand the relationships between the different entities and how these relationships evolve over time. In this paper, we introduce the time-varying graphical lasso (TVGL), a method of inferring time-varying networks from raw time series data. We cast the problem in terms of estimating a sparse time-varying inverse covariance matrix, which reveals a dynamic network of interdependencies between the entities. Since dynamic network inference is a computationally expensive task, we derive a scalable messagepassing algorithm based on the Alternating Direction Method of Multipliers (ADMM) to solve this problem in an efficient way. We also discuss several extensions, including a streaming algorithm to update the model and incorporate new observations in real time. Finally, we evaluate our TVGL algorithm on both real and synthetic datasets, obtaining interpretable results and outperforming state-of-the-art baselines in terms of both accuracy and scalability.
Hallac et al. address the problem of learning a time-varying graph using the time-varying graphical Lasso (TVGL) REF, which combines graphical Lasso with a temporal regularization and finds the solution using the alternating direction method of multipliers (ADMM).
3141660
Network Inference via the Time-Varying Graphical Lasso
{ "venue": "KDD '17", "journal": null, "mag_field_of_study": [ "Mathematics", "Computer Science", "Medicine" ] }
Abstract-In this paper we present generic distributed algorithms for assembling and repairing shapes using modular self-reconfiguring robots. The algorithms work in the sliding cube model. Each module independently evaluates a set of local rules using different evaluation models. Two methods are used to determine the correctness of the algorithms--a graph analysis technique which can prove the rule set is correct for specific instances of the algorithm, and a statistical technique which can produce arbitrary bounds on the likelihood that the rule set functions correctly. An extension of the assembly algorithm can be used to produce arbitrary non-cantilevered convex shapes without holes. The algorithms have been implemented and evaluated in simulation. Current research in self-reconfiguring robots is focused on designing and building hardware, and developing algorithms coupled to specific hardware. We are interested in developing architecture-independent control and planning algorithms for such systems. In our previous work we describe distributed controllers for two tasks for self-reconfiguring robots: compliant locomotion gaits and splitting a large robot with a given behavior into smaller robots with the same behavior. We demonstrate a methodology for doing this work using the sliding cube model, in which modules are represented as cubes. Each module can translate on a substrate of identical cubes and make convex and concave transitions on the substrate. The resulting algorithms are provably correct and can be instantiated easily to a wide range of physical platforms such as the Molecule and Crystal robots built in our lab [4] as well as other robot systems [XI, [14], [15]. Deriving algorithms in this fashion has several advantages: (1) the algorithms are simpler in this abstract model; (2) the algorithms are easier to analyze in the abstract model; (3) the same basic algorithm can be instantiated for many different hardware types, thus providing a rigorous framework in which to compare different algorithms and hardware systems; (4) the analyses and correctness proofs will be inherited by the instantiated algorithms; and (5) ultimately this framework will lead to a better understanding of the computational problems that arise in self-reconfiguring robot research. In this paper we extend our previous work by demonstrating distributed control algorithms for synthesizing shapes and repairing holes in them. Our approach is based on four ideas: (1) use the simplest abstraction for the robot module that fits with existing robot systems (both in shape and actuation); (2) develop distributed algorithms in the form of rules that only require local information; (3) prove correctness of these algorithms with respect to the task; and (4) instantiate these algorithms onto real systems in a way that preserves the algorithmic properties. The use of local rules for compliant locomotion is straightforward, since locomotion does not require precise global shape control. However, it was unclear whether the exclusive use of local rules would be appropriate for assembly tasks in which a specific goal shape is required. Although each module is provided with the goal description, the possible moves are restricted to those permitted by the rule set; the goal description is only used to determine the proximity to the goal shape. As our assembly results demonstrate, it is possible to construct shapes using only local rules for a certain class of configurations.
Our hole repair rule set also uses local rules to fill voids in a multilayer configuration of modules. This is accomplished by modules moving into the void and recruiting neighbor modules to follow them. Local state in the modules simulates message passing to neighbor modules which causes them to move toward the hole. For both the assembly and repair algorithms, simulation is used to verify algorithmic correctness, either by graph analysis or by generating a statistical bound on the possible number of erroneous sequences of rule applications. II. RELATED WORK Self-reconfiguring robots were first proposed in [5]. In this planar system modules were heterogeneous and semi-autonomous. Other research focused on homogeneous systems with non-autonomous modules in two dimensions
In REF the sliding cube model was presented to represent latticestyle modules.
744744
null
null
Wireless body area networks (WBANs) are expected to influence the traditional medical model by assisting caretakers with health telemonitoring. Within WBANs, the transmit power of the nodes should be as small as possible owing to their limited energy capacity but should be sufficiently large to guarantee the quality of the signal at the receiving nodes. When multiple WBANs coexist in a small area, the communication reliability and overall throughput can be seriously affected due to resource competition and interference. We show that the total network throughput largely depends on the WBAN distribution density (λ_p), the transmit power of their nodes (P_t), and their carrier-sensing threshold (γ). Using stochastic geometry, a joint carrier-sensing threshold and power control strategy is proposed to meet the demand of coexisting WBANs based on the IEEE 802.15.4 standard. Given different network distributions and carrier-sensing thresholds, the proposed strategy derives a minimum transmit power according to the varying surrounding environment. We obtain expressions for transmission success probability and throughput adopting this strategy. Using numerical examples, we show that the joint carrier-sensing threshold and transmit power strategy can effectively improve the overall system throughput and reduce interference. Additionally, this paper studies the effects of a guard zone on the throughput using a Matern hard-core point process (HCPP) type II model. Theoretical analysis and simulation results show that the HCPP model can increase the success probability and throughput of networks.
In REF , using a HCPP type II model, the authors proposed a joint carrier-sensing threshold and power control strategy to meet the demand of coexisting WBANs based on the IEEE 802.15.4 standard, which improves the overall system throughput and reduces interference in one frequency channel.
15459081
Throughput assurance of wireless body area networks coexistence based on stochastic geometry
{ "venue": "PLoS ONE", "journal": "PLoS ONE", "mag_field_of_study": [ "Medicine", "Computer Science" ] }
ABSTRACT In this paper, we consider an underlay cognitive radio system, in which a source in a secondary system transmits information to a full-duplex (FD) wireless-powered destination node in the presence of an eavesdropper. In particular, the destination node is equipped with a single receiving antenna and a single transmitting antenna to enable FD operation. The receiving antenna can simultaneously receive information and energy from the source through power-splitter architecture. The received energy is then used in the transmitting antenna to send jamming signals to degrade the eavesdropper's decoding capacity. Upper and lower bounds of probability of strictly positive secrecy capacity (SPSC) have been derived. Numerical results show that under the condition that the interference from the source and destination at primary user's receiver is smaller than the interference temperature limit, upper and lower bounds merge together to become the exact SPSC. INDEX TERMS Cognitive radio networks, full-duplex, simultaneous wireless information and power transfer, probability of strictly positive secrecy capacity.
Authors in REF considered an underlay cognitive radio system, where a source in a secondary system transmitted information to a full-duplex (FD) wireless EH destination node in the presence of an eavesdropper.
31420607
On physical-layer security in underlay cognitive radio networks with full-duplex wireless-powered secondary system
{ "venue": "IEEE Access", "journal": "IEEE Access", "mag_field_of_study": [ "Computer Science" ] }
Although neural machine translation has made significant progress recently, how to integrate multiple overlapping, arbitrary prior knowledge sources remains a challenge. In this work, we propose to use posterior regularization to provide a general framework for integrating prior knowledge into neural machine translation. We represent prior knowledge sources as features in a log-linear model, which guides the learning process of the neural translation model. Experiments on Chinese-English translation show that our approach leads to significant improvements.
Zhang et al. REF represent prior knowledge sources as features in a log-linear model and propose to use posterior regularization to provide a general framework for integrating prior knowledge into NMT.
2445030
Prior Knowledge Integration for Neural Machine Translation using Posterior Regularization
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science" ] }
Abstract-This paper presents a class of routing protocols called road-based using vehicular traffic (RBVT) routing, which outperforms existing routing protocols in city-based vehicular ad hoc networks (VANETs). RBVT protocols leverage real-time vehicular traffic information to create road-based paths consisting of successions of road intersections that have, with high probability, network connectivity among them. Geographical forwarding is used to transfer packets between intersections on the path, reducing the path's sensitivity to individual node movements. For dense networks with high contention, we optimize the forwarding using a distributed receiver-based election of next hops based on a multicriterion prioritization function that takes nonuniform radio propagation into account. We designed and implemented a reactive protocol RBVT-R and a proactive protocol RBVT-P and compared them with protocols representative of mobile ad hoc networks and VANETs. Simulation results in urban settings show that RBVT-R performs best in terms of average delivery rate, with up to a 40% increase compared with some existing protocols. In terms of average delay, RBVT-P performs best, with as much as an 85% decrease compared with the other protocols. Index Terms-Receiver-based next-hop election, road-based routing, vehicular traffic-aware routing.
The Road-Based using Vehicular Traffic (RBVT) routing REF leverages real-time vehicular traffic information to create road-based paths.
3042458
VANET Routing on City Roads Using Real-Time Vehicular Traffic Information
{ "venue": "IEEE Transactions on Vehicular Technology", "journal": "IEEE Transactions on Vehicular Technology", "mag_field_of_study": [ "Computer Science" ] }
Abstract. The field-of-view of standard cameras is very small, which is one of the main reasons that contextual information is not as useful as it should be for object detection. To overcome this limitation, we advocate the use of 360° full-view panoramas in scene understanding, and propose a whole-room context model in 3D. For an input panorama, our method outputs 3D bounding boxes of the room and all major objects inside, together with their semantic categories. Our method generates 3D hypotheses based on contextual constraints and ranks the hypotheses holistically, combining both bottom-up and top-down context information. To train our model, we construct an annotated panorama dataset and reconstruct the 3D model from single-view using manual annotation. Experiments show that solely based on 3D context without any image region category classifier, we can achieve a comparable performance with the state-of-the-art object detector. This demonstrates that when the FOV is large, context is as powerful as object appearance. All data and source code are available online.
360° panorama: The seminal work by Zhang et al. REF advocates the use of 360° panoramas for indoor scene understanding, for the reason that the FOV of 360° panoramas is much more expansive.
15644143
PanoContext: A Whole-Room 3D Context Model for Panoramic Scene Understanding
{ "venue": "ECCV", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Despite the impressive improvements achieved by unsupervised deep neural networks in computer vision and NLP tasks, such improvements have not yet been observed in ranking for information retrieval. The reason may be the complexity of the ranking problem, as it is not obvious how to learn from queries and documents when no supervised signal is available. Hence, in this paper, we propose to train a neural ranking model using weak supervision, where labels are obtained automatically without human annotators or any external resources (e.g., click data). To this aim, we use the output of an unsupervised ranking model, such as BM25, as a weak supervision signal. We further train a set of simple yet effective ranking models based on feed-forward neural networks. We study their effectiveness under various learning scenarios (point-wise and pair-wise models) and using different input representations (i.e., from encoding query-document pairs into dense/sparse vectors to using word embedding representation). We train our networks using tens of millions of training instances and evaluate them on two standard collections: a homogeneous news collection (Robust) and a heterogeneous large-scale web collection (ClueWeb). Our experiments indicate that employing proper objective functions and letting the networks learn the input representation based on weakly supervised data leads to impressive performance, with over 13% and 35% MAP improvements over the BM25 model on the Robust and the ClueWeb collections. Our findings also suggest that supervised neural ranking models can greatly benefit from pre-training on large amounts of weakly labeled data that can be easily obtained from unsupervised IR models.
The unsupervised retrieval scores, e.g., from BM25, have been used as relevance labels to train neural ranking models REF .
3666085
Neural Ranking Models with Weak Supervision
{ "venue": "SIGIR '17", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
We present Moshe, a novel scalable group membership algorithm built specifically for use in wide area networks (WANs), which can suffer partitions. Moshe is designed with three new significant features that are important in this setting: it avoids delivering views that reflect out-of-date memberships; it requires a single round of messages in the common case; and it employs a client-server design for scalability. Furthermore, Moshe's interface supplies the hooks needed to provide clients with full virtual synchrony semantics. We have implemented Moshe on top of a network event mechanism also designed specifically for use in a WAN. In addition to specifying the properties of the algorithm and proving that this specification is met, we provide empirical results of an implementation of Moshe running over the Internet. The empirical results justify the assumptions made by our design and exhibit good performance. In particular, Moshe terminates within a single communication round over 98% of the time. The experimental results also lead to interesting observations regarding the performance of membership algorithms over the Internet.
The algorithm in REF terminates within one round of message communications over 98% of the running time.
15066208
Moshe: A group membership service for WANs
{ "venue": "TOCS", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
We introduce TensorFlow Agents, an efficient infrastructure paradigm for building parallel reinforcement learning algorithms in TensorFlow. We simulate multiple environments in parallel, and group them to perform the neural network computation on a batch rather than individual observations. This allows the TensorFlow execution engine to parallelize computation, without the need for manual synchronization. Environments are stepped in separate Python processes to progress them in parallel without interference of the global interpreter lock. As part of this project, we introduce BatchPPO, an efficient implementation of the proximal policy optimization algorithm. By open sourcing TensorFlow Agents, we hope to provide a flexible starting point for future projects that accelerates future research in the field.
Hafner et al. REF allow the TensorFlow execution engine to parallelize computation to improve training performance.
10281624
TensorFlow Agents: Efficient Batched Reinforcement Learning in TensorFlow
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Mathematics", "Computer Science" ] }
This paper addresses the problem of video object segmentation, where the initial object mask is given in the first frame of an input video. We propose a novel spatiotemporal Markov Random Field (MRF) model defined over pixels to handle this problem. Unlike conventional MRF models, the spatial dependencies among pixels in our model are encoded by a Convolutional Neural Network (CNN). Specifically, for a given object, the probability of a labeling to a set of spatially neighboring pixels can be predicted by a CNN trained for this specific object. As a result, higher-order, richer dependencies among pixels in the set can be implicitly modeled by the CNN. With temporal dependencies established by optical flow, the resulting MRF model combines both spatial and temporal cues for tackling video object segmentation. However, performing inference in the MRF model is very difficult due to the very highorder dependencies. To this end, we propose a novel CNNembedded algorithm to perform approximate inference in the MRF. This algorithm proceeds by alternating between a temporal fusion step and a feed-forward CNN step. When initialized with an appearance-based one-shot segmentation CNN, our model outperforms the winning entries of the DAVIS 2017 Challenge, without resorting to model ensembling or any dedicated detectors.
Based on optical flow and a spatial CNN, a pixel-level spatio-temporal Markov random field (MRF) is built in REF where approximate inference is achieved by using a CNN.
4322524
CNN in MRF: Video Object Segmentation via Inference in a CNN-Based Higher-Order Spatio-Temporal MRF
{ "venue": "2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition", "journal": "2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition", "mag_field_of_study": [ "Computer Science" ] }
Wireless sensor networks have increasingly become contributors of very large amounts of data. The recent deployment of wireless sensor networks in Smart City infrastructures has led to very large amounts of data being generated each day across a variety of domains, with applications including environmental monitoring, healthcare monitoring and transport monitoring. To take advantage of the increasing amounts of data there is a need for new methods and techniques for effective data management and analysis to generate information that can assist in managing the utilization of resources intelligently and dynamically. Through this research, a Multi-Level Smart City architecture is proposed based on semantic web technologies and Dempster-Shafer uncertainty theory. The proposed architecture is described and explained in terms of its functionality and some real-time context-aware scenarios.
In REF , a multi-layer smart city architecture has been presented.
207425260
Smart City Architecture and its Applications Based on IoT
{ "venue": "ANT/SEIT", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
With the advent of 5G cellular systems there is an increased interest in exploring higher frequency bands above 6 GHz. At these frequencies, beamforming appears as a straightforward solution to overcome higher path loss thereby altering the Doppler characteristics of the received waves. Higher frequencies can suffer from strong Doppler impairments because of the linear dependency of Doppler shift with carrier frequency, which makes them challenging to use in high-mobility scenarios, particularly Vehicular-to-Infrastructure (V2I) communications. Therefore, the impact of beamforming on the Doppler characteristics of the received signals is of utter importance for future V2I systems. This paper presents a theoretical analysis of the Doppler power spectrum in the presence of beamforming at the transmit and/or the receive sides. Further approximations are made for the resulting Doppler spread and Doppler shift when the receive beam width is sufficiently small, and a possible design solution is presented to control the Doppler spread in V2I systems. The results can be of key importance in waveform and air interface design for V2I systems.
Lorca et al. presented a theoretical analysis of the Doppler power spectrum in the presence of beamforming at the transmitter and/or the receiver in V2I systems REF .
1912115
On Overcoming the Impact of Doppler Spectrum in Millimeter-Wave V2I Communications
{ "venue": "2017 IEEE Globecom Workshops (GC Wkshps)", "journal": "2017 IEEE Globecom Workshops (GC Wkshps)", "mag_field_of_study": [ "Computer Science", "Mathematics" ] }
Abstract-This paper introduces an adaptive sampling algorithm for a mobile sensor network to estimate a scalar field. The sensor network consists of static nodes and one mobile robot. The static nodes are able to take sensor readings continuously in place, while the mobile robot is able to move and sample at multiple locations. The measurements from the robot and the static nodes are used to reconstruct an underlying scalar field. The algorithm presented in this paper accepts the measurements made by the static nodes as inputs and computes a path for the mobile robot which minimizes the integrated mean square error of the reconstructed field subject to the constraint that the robot has limited energy. We assume that the field does not change when robot is taking samples. In addition to simulations, we have validated the algorithm on a robotic boat and a system of static buoys operating in a lake over several km of traversed distance while reconstructing the temperature field of the lake surface.
In REF , a robotic boat supplements a static sensor network to reduce the field reconstruction error, where the boat's movement is guided by the measurements of the sensor network.
9456425
Adaptive Sampling for Estimating a Scalar Field using a Robotic Boat and a Sensor Network
{ "venue": "Proceedings 2007 IEEE International Conference on Robotics and Automation", "journal": "Proceedings 2007 IEEE International Conference on Robotics and Automation", "mag_field_of_study": [ "Engineering", "Computer Science" ] }
For a property P and a sub-property P′, we say that P is P′-partially testable with q queries if there exists an algorithm that distinguishes, with high probability, inputs in P′ from inputs ε-far from P, using q queries. Some natural properties require many queries to test, but can be partitioned into a small number of subsets for which they are partially testable with very few queries, sometimes even a number independent of the input size. For properties over {0, 1}, the notion of being thus partitionable ties in closely with Merlin-Arthur proofs of Proximity (MAPs) as defined independently in [14]; a partition into r partially-testable properties is the same as a Merlin-Arthur system where the proof consists of the identity of one of the r partially-testable properties, giving a 2-way translation to an O(log r) size proof. Our main result is that for some low complexity properties a partition as above cannot exist, and moreover that for each of our properties there does not exist even a single sub-property featuring both a large size and a query-efficient partial test, in particular improving the lower bound set in [14]. For this we use neither the traditional Yao-type arguments nor the more recent communication complexity method, but open up a new approach for proving lower bounds. First, we use entropy analysis, which allows us to apply our arguments directly to 2-sided tests, thus avoiding the cost of the conversion in [14] from 2-sided to 1-sided tests. Broadly speaking we use "distinguishing instances" of a supposed test to show that a uniformly random choice of a member of the sub-property has "low entropy areas", ultimately leading to it having a low total entropy and hence having a small base set. Additionally, to have our arguments apply to adaptive tests, we use a mechanism of "rearranging" the input bits (through a decision tree that adaptively reads the entire input) to expose the low entropy that would otherwise not be apparent. We also explore the possibility of a connection in the other direction, namely whether the existence of a good partition (or MAP) can lead to a relatively query-efficient standard property test. We provide some preliminary results concerning this question, including a simple lower bound on the possible trade-off. Our second major result is a positive trade-off result for the restricted framework of 1-sided proximity oblivious tests. This is achieved through the construction of a "universal tester" that works the same for all properties admitting the restricted test. Our tester is very related to the notion of sample-based testing (for a non-constant number of queries) as defined by Goldreich and Ron in [13]. In particular it partially resolves an open problem raised by [13].
Relation to Partial Testing REF .
15073214
Partial tests, universal tests and decomposability
{ "venue": "ITCS '14", "journal": null, "mag_field_of_study": [ "Computer Science", "Mathematics" ] }
The paper studies the case of a sensor node which is operating with the power generated by an environmental source. We present our model of an energy driven scheduling scenario that is characterized by the capacity of the node's energy storage, the deadlines and the power dissipation of the tasks to be performed. Since the execution of these tasks requires a certain amount of energy as well as time, we show that the complexity of finding useful scheduling strategies is significantly increased compared to conventional real-time scheduling. We state online scheduling algorithms that jointly account for constraints arising from both the energy and time domain. In order to demonstrate the benefits of our algorithms, we compare them by means of simulation with the classical Earliest Deadline First Algorithm. Wireless sensor networks have been the subject of intensive research over the past several years. As for many other battery-operated embedded systems, a sensor's operating time is a crucial design parameter. As electronic systems continue to shrink, however, less energy is storable on-board. Research continues to develop higher energy-density batteries and supercapacitors, but the amount of energy available still severely limits the system's lifespan. Recently, energy harvesting has emerged as viable option to power sensor nodes: If nodes are equipped with energy transducers like e.g. solar cells, the generated energy may increase the autonomy of the nodes significantly. In [6] , technologies have been discussed how a sensor node may extract energy from its physical environment. Moreover, several prototypes (e.g. [2, 3] ) have been presented which demonstrate both feasibility and usefulness of sensors nodes which are powered by solar or vibrational energy.
A scheduling algorithm that relies on the battery capacity of the sensors is presented by Moser et al. REF .
13513552
LAZY SCHEDULING FOR ENERGY HARVESTING SENSOR NODES
{ "venue": "DIPES", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Background: Computational simulation using numerical analysis methods can help to assess the complex biomechanical and functional characteristics of the mitral valve (MV) apparatus. It is important to correctly determine physical contact interaction between the MV apparatus components during computational MV evaluation. We hypothesize that leaflet-to-chordae contact interaction plays an important role in computational MV evaluation, specifically in quantitating the degree of leaflet coaptation directly related to the severity of mitral regurgitation (MR). In this study, we have performed dynamic finite element simulations of MV function with and without leaflet-to-chordae contact interaction, and determined the effect of leaflet-to-chordae contact interaction on the computational MV evaluation. Methods: Computational virtual MV models were created using the MV geometric data in a patient with normal MV without MR and another with pathologic MV with MR obtained from 3D echocardiography. Computational MV simulation with full contact interaction was specified to incorporate entire physically available contact interactions between the leaflets and chordae tendineae. Computational MV simulation without leaflet-to-chordae contact interaction was specified by defining the anterior and posterior leaflets as the only contact inclusion. Results: Without leaflet-to-chordae contact interaction, the computational MV simulations demonstrated physically unrealistic contact interactions between the leaflets and chordae. With leaflet-to-chordae contact interaction, the anterior marginal chordae retained the proper contact with the posterior leaflet during the entire systole. The size of the non-contact region in the simulation with leaflet-tochordae contact interaction was much larger than for the simulation with only leaflet-to-leaflet contact. We have successfully demonstrated the effect of leaflet-to-chordae contact interaction on determining leaflet coaptation in computational dynamic MV evaluation. We found that physically realistic contact interactions between the leaflets and chordae should be considered to accurately quantitate leaflet coaptation for MV simulation. Computational evaluation of MV function that allows precise quantitation of leaflet coaptation has great potential to better quantitate the severity of MR.
For example, Rim et al. REF studied the effect of leaflet-to-chordae contact interaction and visualized the result as a color map.
18695562
Effect of leaflet-to-chordae contact interaction on computational mitral valve evaluation
{ "venue": "BioMedical Engineering OnLine", "journal": "BioMedical Engineering OnLine", "mag_field_of_study": [ "Computer Science", "Medicine" ] }
Abstract. We present a framework for model checking concurrent software systems which incorporates both states and events. Contrary to other state/event approaches, our work also integrates two powerful verification techniques, counterexample-guided abstraction refinement and compositional reasoning. Our specification language is a state/event extension of linear temporal logic, and allows us to express many properties of software in a concise and intuitive manner. We show how standard automata-theoretic LTL model checking algorithms can be ported to our framework at no extra cost, enabling us to directly benefit from the large body of research on efficient LTL verification. We have implemented this work within our concurrent C model checker, MAGIC, and checked a number of properties of OpenSSL-0.9.6c (an open-source implementation of the SSL protocol) and Micro-C OS version 2 (a real-time operating system for embedded applications). Our experiments show that this new approach not only eases the writing of specifications, but also boasts important gains both in space and in time during verification. In certain cases, we even encountered specifications that could not be verified using traditional pure event-based or state-based approaches, but became tractable within our state/event framework. We report a bug in the source code of Micro-C OS version 2, which was found during our experiments.
This formulation has the advantage that one can adapt techniques that are used for model checking of temporal properties of concurrent software systems, including counterexample-guided abstraction refinement and compositional reasoning REF .
11384322
State/event-based software model checking
{ "venue": "In Integrated Formal Methods", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Abstract-We present a programming-by-demonstration framework for generically extracting the relevant features of a given task and for addressing the problem of generalizing the acquired knowledge to different contexts. We validate the architecture through a series of experiments, in which a human demonstrator teaches a humanoid robot simple manipulatory tasks. A probability-based estimation of the relevance is suggested by first projecting the motion data onto a generic latent space using principal component analysis. The resulting signals are encoded using a mixture of Gaussian/Bernoulli distributions (Gaussian mixture model/Bernoulli mixture model). This provides a measure of the spatio-temporal correlations across the different modalities collected from the robot, which can be used to determine a metric of the imitation performance. The trajectories are then generalized using Gaussian mixture regression. Finally, we analytically compute the trajectory which optimizes the imitation metric and use this to generalize the skill to different contexts. Index Terms-Gaussian mixture model (GMM), human motion subspace, human-robot interaction (HRI), learning by imitation, metric of imitation, programming by demonstration (PbD).
REF presents a programming-by-demonstration framework in which the relevant features of a given task are learned and then generalized to different contexts.
5679082
On Learning, Representing, and Generalizing a Task in a Humanoid Robot
{ "venue": "IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics)", "journal": "IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics)", "mag_field_of_study": [ "Computer Science", "Medicine" ] }
Abstract-In this paper an enhanced Layer-2 multi-hop wireless network implementation for Infrastructure based Wireless Mesh Networks is presented. This work combines the flexibility of Layer-2 Wireless Bridging with the dynamic self-configuring capabilities of MANET routing. The main contribution of this paper is an investigation of the issues encountered when applying a pure bridging based solution to wireless multi-hop networks and the development of several mechanisms to overcome these problems. This work was implemented and deployed in a real testbed environment using Routerboard hardware and utilising a number of open-source network tools in accordance with the needs of our platform. The developed testbed incorporates self-healing and self-configuration features without requiring a traditional MANET routing protocol. Instead the 802.11 beacon frames sent by the Access Points were extended with link information to allow optimal construction of the mesh topology. Results are presented which demonstrate the automated topology construction mechanism. Further results also show the enhancements made to the normal 802.11 Layer-2 mobility mechanism.
Moreover, the authors in REF study the issues encountered when applying pure bridging-based solutions to wireless multi-hop networks and present an enhanced bridge-based implementation that provides dynamic self-configuration and self-healing features without requiring a routing protocol.
18106023
An enhanced bridged-based multi-hop wireless network implementation
{ "venue": "2010 The 5th Annual ICST Wireless Internet Conference (WICON)", "journal": "2010 The 5th Annual ICST Wireless Internet Conference (WICON)", "mag_field_of_study": [ "Computer Science" ] }
The increasing importance of food safety has made traceability a crucial issue in the agribusiness industry. In this article, we have analysed the factors that shape buyer-supplier relationships, and how they influence the traceability of raw materials. In order to do so, first, we have conducted a literature review to develop an analytical framework. Next, we have carried out four case studies on vegetable firms with the purpose of uncovering the variables that characterise buyer-supplier relationships, and their influence on traceability in this sector. Finally, we have compared the observed links with the conceptual framework derived from the literature in order to build an improved model.
Alvare et al. REF studied factors that shape client-supplier relationships and their impact on food traceability.
53452169
Buyer–supplier relationship's influence on traceability implementation in the vegetable industry
{ "venue": null, "journal": "Journal of Purchasing and Supply Management", "mag_field_of_study": [ "Business" ] }
Abstract-The existence of a polynomial kernel for Odd Cycle Transversal was a notorious open problem in parameterized complexity. Recently, this was settled by the present authors (Kratsch and Wahlström, SODA 2012), with a randomized polynomial kernel for the problem, using matroid theory to encode flow questions over a set of terminals in size polynomial in the number of terminals (rather than the total graph size, which may be superpolynomially larger). In the current work we further establish the usefulness of matroid theory to kernelization by showing applications of a result on representative sets due to Lovász (Combinatorial Surveys 1977) and Marx (TCS 2009). We show how representative sets can be used to give a polynomial kernel for the elusive Almost 2-sat problem (where the task is to remove at most k clauses to make a 2-CNF formula satisfiable), solving a major open problem in kernelization. We further apply the representative sets tool to the problem of finding irrelevant vertices in graph cut problems, that is, vertices which can be made undeletable without affecting the status of the problem. This gives the first significant progress towards a polynomial kernel for the Multiway Cut problem; in particular, we get a polynomial kernel for Multiway Cut instances with a bounded number of terminals. Both these kernelization results have significant spin-off effects, producing the first polynomial kernels for a range of related problems. More generally, the irrelevant vertex results have implications for covering min-cuts in graphs. In particular, given a directed graph and a set of terminals, we can find a set of size polynomial in the number of terminals (a cut-covering set) which contains a minimum vertex cut for every choice of sources and sinks from the terminal set. Similarly, given an undirected graph and a set of terminals, we can find a set of vertices, of size polynomial in the number of terminals, which contains a minimum multiway cut for every partition of the terminals into a bounded number of sets. Both results are polynomial time. We expect this to have further applications; in particular, we get direct, reduction rule-based kernelizations for all problems above, in contrast to the indirect compressionbased kernel previously given for Odd Cycle Transversal. All our results are randomized, with failure probabilities which can be made exponentially small in the size of the input, due to needing a representation of a matroid to apply the representative sets tool.
Recently, a polynomial kernel was given for EDGE and NODE MULTIWAY CUT for a constant number of terminals or deletable terminals REF ; nevertheless, the question for a polynomial kernel in the general case remains open.
16259614
Representative Sets and Irrelevant Vertices: New Tools for Kernelization
{ "venue": "2012 IEEE 53rd Annual Symposium on Foundations of Computer Science", "journal": "2012 IEEE 53rd Annual Symposium on Foundations of Computer Science", "mag_field_of_study": [ "Mathematics", "Computer Science" ] }
The emergence and wide-spread use of online social networks has led to a dramatic increase on the availability of social activity data. Importantly, this data can be exploited to investigate, at a microscopic level, some of the problems that have captured the attention of economists, marketers and sociologists for decades, such as, e.g., product adoption, usage and competition. In this paper, we propose a continuous-time probabilistic model, based on temporal point processes, for the adoption and frequency of use of competing products, where the frequency of use of one product can be modulated by those of others. This model allows us to efficiently simulate the adoption and recurrent usages of competing products, and generate traces in which we can easily recognize the effect of social influence, recency and competition. We then develop an inference method to efficiently fit the model parameters by solving a convex program. The problem decouples into a collection of smaller subproblems, thus scaling easily to networks with hundred of thousands of nodes. We validate our model over synthetic and real diffusion data gathered from Twitter, and show that the proposed model does not only provides a good fit to the data and more accurate predictions than alternatives but also provides interpretable model parameters, which allow us to gain insights into some of the factors driving product adoption and frequency of use.
Valera and Gomez-Rodriguez REF developed a method to predict the adoption and use frequency of similar products in social networks.
15024920
Modeling Adoption and Usage of Competing Products
{ "venue": "2015 IEEE International Conference on Data Mining", "journal": "2015 IEEE International Conference on Data Mining", "mag_field_of_study": [ "Computer Science" ] }
Motivated by applications to sensor, peer-to-peer, and adhoc networks, we study the problem of computing functions of values at the nodes in a network in a totally distributed manner. In particular, we consider separable functions, which can be written as linear combinations of functions of individual variables. Known iterative algorithms for averaging can be used to compute the normalized values of such functions, but these algorithms do not extend in general to the computation of the actual values of separable functions. The main contribution of this paper is the design of a distributed randomized algorithm for computing separable functions based on properties of exponential random variables. We bound the running time of our algorithm in terms of the running time of an information spreading algorithm used as a subroutine by the algorithm. Since we are interested in totally distributed algorithms, we consider a randomized gossip mechanism for information spreading as the subroutine. Combining these algorithms yields a complete and simple distributed algorithm for computing separable functions. The second contribution of this paper is an analysis of the information spreading time of the gossip algorithm. This analysis yields an upper bound on the information spreading time, and therefore a corresponding upper bound on the running time of the algorithm for computing separable functions, in terms of the conductance of an appropriate stochastic matrix. These bounds imply that, for a class of graphs with small spectral gap (such as grid graphs), the time used by our algorithm to compute averages is of a smaller order than the time required for the computation of averages by a known iterative gossip scheme [5] .
Most of the literature in the gossip-algorithm setting computes functions such as the average, the sum, or, more generally, separable functions REF.
949717
Computing separable functions via gossip
{ "venue": "PODC '06", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Abstract-We study the connectivity and capacity of finite area ad hoc wireless networks, with an increasing number of nodes (dense networks). We find that the properties of the network strongly depend on the shape of the attenuation function. For power law attenuation functions, connectivity scales, and the available rate per node is known to decrease like 1/√n. On the contrary, if the attenuation function does not have a singularity at the origin and is uniformly bounded, we obtain bounds on the percolation domain for large node densities, which show that either the network becomes disconnected, or the available rate per node decreases like 1/n.
In REF, the trade-off between connectivity and capacity of dense ad hoc networks is studied.
12961161
Connectivity vs capacity in dense ad hoc networks
{ "venue": "IEEE INFOCOM 2004", "journal": "IEEE INFOCOM 2004", "mag_field_of_study": [ "Mathematics", "Computer Science" ] }
Automatic neural architecture design has shown its potential in discovering powerful neural network architectures. Existing methods, no matter based on reinforcement learning or evolutionary algorithms (EA), conduct architecture search in a discrete space, which is highly inefficient. In this paper, we propose a simple and efficient method to automatic neural architecture design based on continuous optimization. We call this new approach neural architecture optimization (NAO). There are three key components in our proposed approach: (1) An encoder embeds/maps neural network architectures into a continuous space. (2) A predictor takes the continuous representation of a network as input and predicts its accuracy. (3) A decoder maps a continuous representation of a network back to its architecture. The performance predictor and the encoder enable us to perform gradient based optimization in the continuous space to find the embedding of a new architecture with potentially better accuracy. Such a better embedding is then decoded to a network by the decoder. Experiments show that the architecture discovered by our method is very competitive for image classification task on CIFAR-10 and language modeling task on PTB, outperforming or on par with the best results of previous architecture search methods with a significantly reduction of computational resources. Specifically we obtain 2.11% test set error rate for CIFAR-10 image classification task and 56.0 test set perplexity of PTB language modeling task. The best discovered architectures on both tasks are successfully transferred to other tasks such as CIFAR-100 and WikiText-2. Furthermore, combined with the recent proposed weight sharing mechanism, we discover powerful architecture on CIFAR-10 (with error rate 3.53%) and on PTB (with test set perplexity 56.6), with very limited computational resources (less than 10 GPU hours) for both tasks.
More recently, Luo et al. REF propose the neural architecture optimization (NAO) method, which performs the architecture search in a continuous space by exploiting an encoder-decoder technique.
52071151
Neural Architecture Optimization
{ "venue": "NeurIPS", "journal": null, "mag_field_of_study": [ "Computer Science", "Mathematics" ] }
Featured Application: In this paper, we proposed a classification model of Tibetan medical syndrome based on atomic classification association rules to provide effective decision-making support for the diagnosis and treatment of common plateau diseases more scientifically. Abstract: Classification association rules that integrate association rules with classification are playing an important role in data mining. However, the time cost on constructing the classification model, and predicting new instances, will be long, due to the large number of rules generated during the mining of association rules, which also will result in the large system consumption. Therefore, this paper proposed a classification model based on atomic classification association rules, and applied it to construct the classification model of a Tibetan medical syndrome for the common plateau disease called Chronic Atrophic Gastritis. Firstly, introduce the idea of "relative support", and use the constraint-based Apriori algorithm to mine the strong atomic classification association rules between symptoms and syndrome, and the knowledge base of Tibetan medical clinics will be constructed. Secondly, build the classification model of the Tibetan medical syndrome after pruning and prioritizing rules, and the idea of "partial classification" and "first easy to post difficult" strategy are introduced to realize the prediction of this Tibetan medical syndrome. Finally, validate the effectiveness of the classification model, and compare with the CBA algorithm and four traditional classification algorithms. The experimental results showed that the proposed method can realize the construction and classification of the classification model of the Tibetan medical syndrome in a shorter time, with fewer but more understandable rules, while ensuring a higher accuracy with 92.8%.
Zhu et al. REF proposed a classification model based on atomic classification association rules.
149448569
Research on Classification of Tibetan Medical Syndrome in Chronic Atrophic Gastritis
{ "venue": null, "journal": "Applied Sciences", "mag_field_of_study": [ "Engineering" ] }
Much work on idioms has focused on type identification, i.e., determining whether a sequence of words can form an idiomatic expression. Since an idiom type often has a literal interpretation as well, token classification of potential idioms in context is critical for NLP. We explore the use of informative prior knowledge about the overall syntactic behaviour of a potentially-idiomatic expression (type-based knowledge) to determine whether an instance of the expression is used idiomatically or literally (tokenbased knowledge). We develop unsupervised methods for the task, and show that their performance is comparable to that of state-of-the-art supervised techniques.
REF , for example, use prior knowledge about the overall syntactic behavior of an idiomatic expression to determine whether an instance of the expression is used literally or idiomatically.
235425
Pulling their Weight: Exploiting Syntactic Forms for the Automatic Identification of Idiomatic Expressions in Context
{ "venue": "Workshop on A Broader Perspective on Multiword Expressions", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
With the increasing development of Web 2.0, such as social media and online businesses, the need for perception of opinions, attitudes, and emotions grows rapidly. Sentiment analysis, the topic studying such subjective feelings expressed in text, has attracted significant attention from both the research community and industry. Although we have known sentiment analysis as a task of mining opinions expressed in text and analyzing the entailed sentiments and emotions, so far the task is still vaguely defined in the research literature because it involves many overlapping concepts and sub-tasks. Because this is an important area of scientific research, the field needs to clear this vagueness and define various directions and aspects in detail, especially for students, scholars, and developers new to the field. In fact, the field includes numerous natural language processing tasks with different aims (such as sentiment classification, opinion information extraction, opinion summarization, sentiment retrieval, etc.) and these have multiple solution paths. Bing Liu has done a great job in this book in providing a thorough exploration and an anatomy of the sentiment analysis problem and conveyed a wealth of knowledge about different aspects of the field.
Sentiment analysis REF .
2408452
null
null
Identifying complex words (CWs) is an important, yet often overlooked, task within lexical simplification (The process of automatically replacing CWs with simpler alternatives). If too many words are identified then substitutions may be made erroneously, leading to a loss of meaning. If too few words are identified then those which impede a user's understanding may be missed, resulting in a complex final text. This paper addresses the task of evaluating different methods for CW identification. A corpus of sentences with annotated CWs is mined from Simple Wikipedia edit histories, which is then used as the basis for several experiments. Firstly, the corpus design is explained and the results of the validation experiments using human judges are reported. Experiments are carried out into the CW identification techniques of: simplifying everything, frequency thresholding and training a support vector machine. These are based upon previous approaches to the task and show that thresholding does not perform significantly differently to the more naïve technique of simplifying everything. The support vector machine achieves a slight increase in precision over the other two methods, but at the cost of a dramatic trade off in recall.
Further experiments in REF show that a more resource-intensive thresholdbased approach does not perform significantly differently on this dataset to a more naïve technique of simplifying everything, while an SVM classifier performs better in terms of precision but does so at the cost of a much lower recall.
17679719
A Comparison of Techniques to Automatically Identify Complex Words.
{ "venue": "ACL", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Statistics comes in two main flavors: frequentist and Bayesian. For historical and technical reasons, frequentist statistics have traditionally dominated empirical data analysis, and certainly remain prevalent in empirical software engineering. This situation is unfortunate because frequentist statistics suffer from a number of shortcomings-such as lack of flexibility and results that are unintuitive and hard to interpret-that curtail their effectiveness when dealing with the heterogeneous data that is increasingly available for empirical analysis of software engineering practice. In this paper, we pinpoint these shortcomings, and present Bayesian data analysis techniques that provide tangible benefits-as they can provide clearer results that are simultaneously robust and nuanced. After a short, high-level introduction to the basic tools of Bayesian statistics, we present the reanalysis of two empirical studies on the effectiveness of automatically generated tests and the performance of programming languages, respectively. By contrasting the original frequentist analyses with our new Bayesian analyses, we demonstrate the concrete advantages of the latter. To conclude we advocate a more prominent role for Bayesian statistical techniques in empirical software engineering research and practice. Availability. All machine-readable data and analysis scripts used in this paper's analyses are freely available online at https://bitbucket.org/caf/bda-in-ese Empirical research in software engineering. Statistical analysis of empirical data has become commonplace in software engineering research [84, 3, 36] , and it is even making its way into software development practices [45] . As we discuss below, the overwhelming majority of statistical techniques that are being used in software engineering empirical research are, however, of the frequentist kind, with Bayesian statistics hardly even mentioned. Of course, Bayesian statistics is a fundamental component of many machine learning techniques [7, 39] ; as such, it is used in software engineering research indirectly whenever machine learning is used. In this paper, however, we are concerned with the direct usage of statistics to analyze empirical data from the scientific perspective-a pursuit that seems mainly confined to frequentist techniques in software engineering [36] . As we argue in the rest of the paper, this is a lost opportunity because Bayesian techniques do not suffer from several limitations of frequentist ones, and can support rich, robust analyses in several situations. To validate the impression that Bayesian statistics are not normally used in empirical software engineering, we carried out a small literature review of ICSE papers. We selected all papers from the main research track of the latest six editions of the International Conference on Software Engineering (ICSE 2013 to ICSE 2018) that mention "empirical" in their title or in their section's name in the proceedings. This gave 25 papers, from which we discarded one [76] that turned out not to be an empirical study. The experimental data in the remaining 24 papers come from various sources: the output of analyzers and other tools [16, 17, 18, 55, 62] , the mining of repositories of software and other artifacts [13, 48, 50, 70, 87, 88] , the outcome of controlled experiments involving human subjects [61, 79, 85] , interviews and surveys [5, 8, 23, 44, 46, 49, 65, 72] , and a literature review [74] .
As one would expect from a top-tier venue like ICSE, the 24 papers follow recommended practices in reporting and analyzing data, using significance testing (6 papers), effect sizes (5 papers), correlation coefficients (5 papers), frequentist regression (2 papers), and visualization in charts or tables (23 papers). None of the papers, however, uses Bayesian statistics. In fact, no paper but two [23, 87] even mentions the terms "Bayes" or "Bayesian". One of the exceptions [87] only cites Bayesian machine-learning techniques used in related work to which it compares. The other exception [23] includes a presentation of the two views of frequentist and Bayesian statistics-with a critique of pvalues similar to the one we make in Sect. 2.2-but does not show how the latter can be used in practice. The aim of [23] is investigating the relationship between empirical findings in software engineering and the actual beliefs of programmers about the same topics. To this end, it is based on a survey of programmers whose responses are analyzed using frequentist statistics; Bayesian statistics is mentioned to frame the discussion about the relationship between evidence and beliefs, but does not feature past the introductory second section. Our paper has a more direct aim: to concretely show how Bayesian analysis can be applied in practice in empirical software engineering research, as an alternative to frequentist statistics; thus, its scope is complementary to [23]'s. More generally, we are not aware of any direct application of Bayesian data analysis to empirical software engineering data with the exception of [28, 29] and [26] . The technical report [28] and its short summary [29] are our preliminary investigations along the lines of the present paper. Ernst [26] presents a conceptual replication of an existing study to argue the analytical effectiveness of multilevel Bayesian models.
Only recently, Furia et al. REF proposed such guidelines after reanalysing two empirical studies with Bayesian techniques, revealing their advantages in providing clearer results that are simultaneously robust and nuanced.
53293439
Bayesian Data Analysis in Empirical Software Engineering Research
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science", "Mathematics" ] }
Abstract-Traditionally, Markov models have been used to study multiserver systems using exhaustive or gated service. In addition, exhaustive-limited and gate-limited models have also been used in communication systems to reduce overall latency. Recently, the authors have proposed a new Markov Chain approach to study gate-limited service. Multiqueue systems such as polling systems, in which the server serves various queues, have also been extensively studied, but as a separate branch of queueing theory. This paper proposes to describe multiqueue systems in terms of a new Markov Chain called the Zero-Server Markov Chain (ZSMC). The model is used to derive a formula for the waiting times in an exhaustive polling system. An intuitive result is obtained, and this is used to develop an approximate method which works well over normal operational ranges.
Mapp et al. REF have proposed to describe multiqueue systems in terms of a new Markov Chain called the Zero-Server Markov Chain (ZSMC).
7435103
Exploring a New Markov Chain Model for Multiqueue Systems
{ "venue": "2010 12th International Conference on Computer Modelling and Simulation", "journal": "2010 12th International Conference on Computer Modelling and Simulation", "mag_field_of_study": [ "Computer Science" ] }
Abstract-Rapidly-exploring random trees (RRTs) are popular in motion planning because they find solutions efficiently to single-query problems. Optimal RRTs (RRT*s) extend RRTs to the problem of finding the optimal solution, but in doing so asymptotically find the optimal path from the initial state to every state in the planning domain. This behaviour is not only inefficient but also inconsistent with their single-query nature. For problems seeking to minimize path length, the subset of states that can improve a solution can be described by a prolate hyperspheroid. We show that unless this subset is sampled directly, the probability of improving a solution becomes arbitrarily small in large worlds or high state dimensions. In this paper, we present an exact method to focus the search by directly sampling this subset. The advantages of the presented sampling technique are demonstrated with a new algorithm, Informed RRT*. This method retains the same probabilistic guarantees on completeness and optimality as RRT* while improving the convergence rate and final solution quality. We present the algorithm as a simple modification to RRT* that could be further extended by more advanced path-planning algorithms. We show experimentally that it outperforms RRT* in rate of convergence, final solution cost, and ability to find difficult passages while demonstrating less dependence on the state dimension and range of the planning problem.
As a further improvement to RRT*, informed RRT* was introduced by REF .
12233239
Informed RRT*: Optimal Sampling-based Path Planning Focused via Direct Sampling of an Admissible Ellipsoidal Heuristic
{ "venue": "2014 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2014), pp. 2997-3004, 14-18 Sept. 2014", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
The Web browser is a killer app on mobile devices such as smartphones. However, the user experience of mobile Web browsing is undesirable because of the slow resource loading. To improve the performance of Web resource loading, caching has been adopted as a key mechanism. However, the existing passive measurement studies cannot comprehensively characterize the performance of mobile Web caching. For example, most of these studies mainly focus on client-side implementations but not server-side configurations, suffer from biased user behaviors, and fail to study "miscached" resources. To address these issues, in this paper, we present a proactive approach for a comprehensive measurement study on mobile Web cache performance. The key idea of our approach is to proactively crawl resources from hundreds of websites periodically with a fine-grained time interval. Thus, we are able to uncover the resource update history and cache configurations at the server side, and analyze the cache performance in various time granularities. Based on our collected data, we build a new cache analysis model and study the upper bound of how high percentage of resources could potentially be cached and how effective the caching works in practice. We report detailed analysis results of different websites and various types of Web resources, and identify the problems caused by unsatisfactory cache performance. In particular, we identify two major problems -Redundant Transfer and Miscached Resource, which lead to unsatisfactory cache performance. We investigate three main root causes: Same Content, Heuristic Expiration, and Conservative Expiration Time, and discuss what mobile Web developers can do to mitigate those problems.
Our previous work REF adopted a proactive approach to measure the performance of mobile Web cache and found that more than 50% of resource requests are redundant on average for 55 popular mobile websites.
2306791
Measurement and Analysis of Mobile Web Cache Performance
{ "venue": "WWW '15", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
The resource limitation of multi-access edge computing (MEC) is one of the major issues in order to provide low-latency high-reliability computing services for Internet of Things (IoT) devices. Moreover, with the steep rise of task requests from IoT devices, the requirement of computation tasks needs dynamic scalability while using the potential of offloading tasks to mobile volunteer nodes (MVNs). We, therefore, propose a scalable vehicle-assisted MEC (SVMEC) paradigm, which cannot only relieve the resource limitation of MEC but also enhance the scalability of computing services for IoT devices and reduce the cost of using computing resources. In the SVMEC paradigm, a MEC provider can execute its users' tasks by choosing one of three ways: (i) Do itself on local MEC, (ii) offload to the remote cloud, and (iii) offload to the MVNs. We formulate the problem of joint node selection and resource allocation as a Mixed Integer Nonlinear Programming (MINLP) problem, whose major objective is to minimize the total computation overhead in terms of the weighted-sum of task completion time and monetary cost for using computing resources. In order to solve it, we adopt alternative optimization techniques by decomposing the original problem into two sub-problems: Resource allocation sub-problem and node selection sub-problem. Simulation results demonstrate that our proposed scheme outperforms the existing schemes in terms of the total computation overhead.
Qui et al. REF studied task offloading to a MEC server whose resource capacity is extended by hiring resources from the cloud and from vehicular nodes, with the goal of minimizing the total computation overhead, i.e., the weighted sum of task completion time and the monetary cost of using computing resources.
67772211
Joint Node Selection and Resource Allocation for Task Offloading in Scalable Vehicle-Assisted Multi-Access Edge Computing
{ "venue": "Symmetry", "journal": "Symmetry", "mag_field_of_study": [ "Mathematics", "Computer Science" ] }
Abstract-With the rise of Internet computing, outsourcing difficult computational tasks became an important need. Yet, once the computation is outsourced, the job owner loses control, and hence it is crucial to provide guarantees against malicious actions of the contractors involved. One may want to ensure that both the job itself and any inputs to it are hidden from the contractors, while still enabling them to perform the necessary computation. Furthermore, one would check that the computation was carried out correctly. In this paper, we are not concerned with hiding the job or the data, but our main task is to ensure that the job is computed correctly. We also observe that not all contractors are malicious; rather, majority are rational. Thus, our approach brings together elements from cryptography, as well as game theory and mechanism design. We achieve the following results: (1) We incentivize all the rational contractors to perform the outsourced job correctly, (2) we guarantee high fraction (e.g., 99.9 percent) of correct results even in the existence of a relatively large fraction (e.g., 33 percent) of malicious irrational contractors in the system, (3) and we show that our system achieves these while being almost as efficient as running the job locally (e.g., with only 3 percent overhead). Such a high correctness guarantee was not known to be achieved with such efficiency.
In the crowdsourcing setting, Kupcu REF combines cryptography and game theory to incentivize rational workers to perform the computation correctly, and to guarantee the result quality even in the presence of irrational workers who intentionally submit incorrect results.
1410055
Incentivized Outsourced Computation Resistant to Malicious Contractors
{ "venue": "IEEE Transactions on Dependable and Secure Computing", "journal": "IEEE Transactions on Dependable and Secure Computing", "mag_field_of_study": [ "Computer Science" ] }
This paper is concerned with the problem of name disambiguation. By name disambiguation, we mean distinguishing persons with the same name. It is a critical problem in many knowledge management applications. Although much research work has been conducted, the problem is still not resolved and becomes even more serious, in particular with the popularity of Web 2.0. Previously, name disambiguation was often undertaken in either a supervised or unsupervised fashion. This paper first gives a constraint-based probabilistic model for semi-supervised name disambiguation. Specifically, we focus on investigating the problem in an academic researcher social network (http://arnetminer.org). The framework combines constraints and Euclidean distance learning, and allows the user to refine the disambiguation results. Experimental results on the researcher social network show that the proposed framework significantly outperforms the baseline method using an unsupervised hierarchical clustering algorithm.
Zhang et al. REF proposed a constraint-based probabilistic name disambiguation model using semi-supervised learning.
370446
A constraint-based probabilistic framework for name disambiguation
{ "venue": "CIKM '07", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
We introduce the first fully automatic, fully semantic frame based MT evaluation metric, MEANT, that outperforms all other commonly used automatic metrics in correlating with human judgment on translation adequacy. Recent work on HMEANT, which is a human metric, indicates that machine translation can be better evaluated via semantic frames than other evaluation paradigms, requiring only minimal effort from monolingual humans to annotate and align semantic frames in the reference and machine translations. We propose a surprisingly effective Occam's razor automation of HMEANT that combines standard shallow semantic parsing with a simple maximum weighted bipartite matching algorithm for aligning semantic frames. The matching criterion is based on lexical similarity scoring of the semantic role fillers through a simple context vector model which can readily be trained using any publicly available large monolingual corpus. Sentence level correlation analysis, following standard NIST MetricsMATR protocol, shows that this fully automated version of HMEANT achieves significantly higher Kendall correlation with human adequacy judgments than BLEU, NIST, ME-TEOR, PER, CDER, WER, or TER. Furthermore, we demonstrate that performing the semantic frame alignment automatically actually tends to be just as good as performing it manually. Despite its high performance, fully automated MEANT is still able to preserve HMEANT's virtues of simplicity, representational transparency, and inexpensiveness.
For instance, REF use a maximum-weighted bipartite matching algorithm to align predicates with a lexical-similarity measure to evaluate semantic-role correspondence.
974899
Fully Automatic Semantic MT Evaluation
{ "venue": "Proceedings of the Seventh Workshop on Statistical Machine Translation", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Abstract-Providing efficient data aggregation while preserving data privacy is a challenging problem in wireless sensor networks research. In this paper, we present two privacy-preserving data aggregation schemes for additive aggregation functions. The first scheme -Cluster-based Private Data Aggregation (CPDA)-leverages clustering protocol and algebraic properties of polynomials. It has the advantage of incurring less communication overhead. The second scheme -Slice-Mix-AggRegaTe (SMART)-builds on slicing techniques and the associative property of addition. It has the advantage of incurring less computation overhead. The goal of our work is to bridge the gap between collaborative data collection by wireless sensor networks and data privacy. We assess the two schemes by privacy-preservation efficacy, communication overhead, and data aggregation accuracy. We present simulation results of our schemes and compare their performance to a typical data aggregation scheme -TAG, where no data privacy protection is provided. Results show the efficacy and efficiency of our schemes. To the best of our knowledge, this paper is among the first on privacy-preserving data aggregation in wireless sensor networks.
In REF, two privacy-preserving schemes, Cluster-based Private Data Aggregation (CPDA) and Slice-Mix-AggRegaTe (SMART), are proposed for the additive aggregation function (sum).
10398325
PDA: Privacy-Preserving Data Aggregation in Wireless Sensor Networks
{ "venue": "IEEE INFOCOM 2007 - 26th IEEE International Conference on Computer Communications", "journal": "IEEE INFOCOM 2007 - 26th IEEE International Conference on Computer Communications", "mag_field_of_study": [ "Computer Science" ] }
In this article, a symbiotic radio (SR) system is proposed to support passive Internet of Things (IoT), in which a backscatter device (BD), also called IoT device, is parasitic in a primary transmission. The primary transmitter (PT) is designed to assist both the primary and BD transmissions, and the primary receiver (PR) is used to decode the information from the PT as well as the BD. The symbol period for BD transmission is assumed to be either equal to or much greater than that of the primary one, resulting in parasitic SR (PSR) or commensal SR (CSR) setup. We consider a basic SR system which consists of three nodes: 1) a multiantenna PT; 2) a single-antenna BD; and 3) a single-antenna PR. We first derive the achievable rates for the primary and BD transmissions for each setup. Then, we formulate two transmit beamforming optimization problems, i.e., the weighted sum-rate maximization (WSRM) problem and the transmit power minimization (TPM) problem, and solve these nonconvex problems by applying the semidefinite relaxation (SDR) technique. In addition, a novel transmit beamforming structure is proposed to reduce the computational complexity of the solutions. The simulation results show that for CSR setup, the proposed solution enables the opportunistic transmission for the BD via energy-efficient passive backscattering without any loss in spectral efficiency, by properly exploiting the additional signal path from the BD.
The term "symbiotic radio" was first introduced in REF, in which a backscatter transmission is parasitic on a primary transmission, and the achievable-rate trade-off between the primary and backscatter transmissions is realized through transmit beamforming.
53106802
Symbiotic Radio: A New Communication Paradigm for Passive Internet of Things
{ "venue": "IEEE Internet of Things Journal", "journal": "IEEE Internet of Things Journal", "mag_field_of_study": [ "Computer Science", "Mathematics" ] }
Abstract: We consider the joint admission and power control problem in a dense small cell network, which contains multiple interference links. The goal is mainly to maximize the number of admitted links, and at the same time minimize the transmit power. We formulate the admission control and power control problem as a joint optimization problem, which is however NP-hard. This NP-hard problem can be relaxed to a p-norm problem (0<p<1) by using the correntropy induced metric. The correntropy is a novel nonlinear similarity measure, which has been successfully used in robust and sparse signal processing, especially when the data contain large outliers. Thus, in this work we propose a new correntropy induced joint power and admission control algorithm (CJPA). To achieve a faster convergence speed, we also propose an adaptive kernel size method, in which the kernel size is determined by the error so that the convergence speed is the fastest during the iterations. Simulation results show that the proposed approach can achieve much better results than the existing works.
A joint admission and power control scheme for dense small cell networks (DSCNs) was proposed by Luan et al. REF, which aims to maximize the number of admitted connections while minimizing the transmit power.
29522889
Correntropy induced joint power and admission control algorithm for dense small cell network
{ "venue": "IET Communications", "journal": "IET Communications", "mag_field_of_study": [ "Computer Science", "Mathematics" ] }
Abstract-We address the problem of cooperative conflict resolution for multi-vehicle motion planning in mixed-traffic scenarios, where automated and manually-driven vehicles coexist. We propose a novel solution based on reachability analysis, which provides the drivable area of each collaborative traffic participant. Overlapping drivable areas are redistributed so that each traffic participant receives an individual area for motion planning. We do not stipulate a specific method for predicting the future motion of non-communicating traffic participants. Furthermore, uncertainties in the initial states of the cooperative vehicles, e.g. due to sensor noise, can be easily integrated. A byproduct of our approach is that collaborative groups can be automatically found by identifying conflicting drivable areas; if no conflict exists, collaboration becomes unnecessary. We demonstrate the redistribution of drivable areas with two numerical examples. Collaborative motion planning of various automated road vehicles is clearly superior in terms of achievable safety and comfort compared to computing individual motion plans. This is because individual motion planning is a special case of collaborative planning when vehicles are not communicating. Many promising approaches for multi-vehicle motion planning have been developed; however, dealing with mixed-traffic situations and uncertainty is still an open research topic. We propose a unified approach for cooperative conflict resolution based on the computation of drivable areas where automated and manually-driven vehicles share the road. We first review literature concerning specific applications like intersection management and merging; after, we discuss priority-based, market-based, and reservation-based approaches. Much work on cooperative motion planning has been devoted to road intersections, since these are hotspots for traffic accidents. Collision avoidance at intersections using V2V-communication for cooperation is investigated in [1] under the consideration of model uncertainty and communication delays. Colombo et al. [2] solve scheduling problems to ensure safety during intersection passages. Another line of research is the design of cooperative lanechanging and merging strategies. In [3] , it is discussed how V2V-communication can be utilized for cooperative decision making: a distributed receding horizon control framework is set up to solve tasks of platooning and cooperative merging. Further lane-changing and merging control algorithms for platoons of vehicles are developed in [4], [5] .
Manzinger and Althoff REF develop an algorithm for cooperative conflict resolution by redistributing overlapping drivable areas among the cooperating vehicles.
3455895
Negotiation of drivable areas of cooperative vehicles for conflict resolution
{ "venue": "2017 IEEE 20th International Conference on Intelligent Transportation Systems (ITSC)", "journal": "2017 IEEE 20th International Conference on Intelligent Transportation Systems (ITSC)", "mag_field_of_study": [ "Engineering", "Computer Science" ] }
Abstract: This paper proposes a mobile biological sensor system that can assist in early detection of forest fires one of the most dreaded natural disasters on the earth. The main idea presented in this paper is to utilize animals with sensors as Mobile Biological Sensors (MBS). The devices used in this system are animals which are native animals living in forests, sensors (thermo and radiation sensors with GPS features) that measure the temperature and transmit the location of the MBS, access points for wireless communication and a central computer system which classifies of animal actions. The system offers two different methods, firstly: access points continuously receive data about animals' location using GPS at certain time intervals and the gathered data is then classified and checked to see if there is a sudden movement (panic) of the animal groups: this method is called animal behavior classification (ABC). The second method can be defined as thermal detection (TD): the access points get the temperature values from the MBS devices and send the data to a central computer to check for instant changes in the temperatures. This system may be used for many purposes other than fire detection, namely animal tracking, poaching prevention and detecting instantaneous animal death.
Sahin addressed the role of animals as biological sensors in forest fire detection REF .
6546632
Animals as Mobile Biological Sensors for Forest Fire Detection
{ "venue": "Sensors", "journal": "Sensors", "mag_field_of_study": [ "Computer Science", "Medicine" ] }
We analyze the convergence of (stochastic) gradient descent algorithm for learning a convolutional filter with Rectified Linear Unit (ReLU) activation function. Our analysis does not rely on any specific form of the input distribution and our proofs only use the definition of ReLU, in contrast with previous works that are restricted to standard Gaussian input. We show that (stochastic) gradient descent with random initialization can learn the convolutional filter in polynomial time and the convergence rate depends on the smoothness of the input distribution and the closeness of patches. To the best of our knowledge, this is the first recovery guarantee of gradient-based algorithms for convolutional filter on non-Gaussian input distributions. Our theory also justifies the two-stage learning rate strategy in deep neural networks. While our focus is theoretical, we also present experiments that illustrate our theoretical findings. Deep convolutional neural networks (CNN) have achieved the state-of-the-art performance in many applications such as computer vision [Krizhevsky et al., 2012] , natural language processing [Dauphin et al., 2016] and reinforcement learning applied in classic games like Go [Silver et al., 2016]. Despite the highly nonconvex nature of the objective function, simple first-order algorithms like stochastic gradient descent and its variants often train such networks successfully. On the other hand, the success of convolutional neural network remains elusive from an optimization perspective. When the input distribution is not constrained, existing results are mostly negative, such as hardness of learning a 3-node neural network [Blum and Rivest, 1989] or a non-overlap convolutional filter [Brutzkus and Globerson, 2017] . Recently, Shamir [2016] showed learning a simple one-layer fully connected neural network is hard for some specific input distributions. These negative results suggest that, in order to explain the empirical success of SGD for learning neural networks, stronger assumptions on the input distribution are needed.. Recently, a line of research [Tian, 2017 , Brutzkus and Globerson, 2017 , Li and Yuan, 2017 , Soltanolkotabi, 2017 , Zhong et al., 2017 assumed the input distribution be standard Gaussian N (0, I) and showed (stochastic) gradient descent is able to recover neural networks with ReLU activation in polynomial time. * This work is done while the author is at Facebook AI Research.
REF studied the convergence of gradient-based methods for learning a convolutional filter.
3624410
When is a Convolutional Filter Easy To Learn?
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science", "Mathematics" ] }
A common way to protect privacy of sensitive information is to introduce additional randomness, beyond sampling. Differential Privacy (DP) provides a rigorous framework for quantifying the privacy risk of such procedures, which allow for data summary releases, such as a statistic T. However, in theory and practice, the structure of the statistic T is often not carefully analyzed, resulting in inefficient implementations of DP mechanisms that introduce excessive randomness to minimize the risk, reducing the statistical utility of the result. We introduce the adjacent output space S_T of T, and connect S_T to the notion of sensitivity, which controls the amount of randomness required to protect privacy. Using S_T, we formalize the comparison of K-Norm Mechanisms and derive the optimal one as a function of the adjacent output space. We use these methods to extend the Objective Perturbation and Functional mechanisms to arbitrary K-Mechanisms, and apply them to Logistic and Linear Regression, respectively, to allow for differentially private releases of statistical results. We compare the performance through simulations, and on housing price data. Our results demonstrate that the choice of mechanism impacts the utility of the output, and our proposed methodology offers a significant improvement in utility for the same level of risk.
REF derive the optimal mechanism in a similar sense, among the class of K-Norm Mechanisms.
88517593
Structure and Sensitivity in Differential Privacy: Comparing K-Norm Mechanisms
{ "venue": null, "journal": "arXiv: Methodology", "mag_field_of_study": [ "Mathematics" ] }
We have developed a computational method that counts the frequencies of unique k-mers in FASTQ-formatted genome data and uses this information to infer the genotypes of known variants. FastGT can detect the variants in a 30x genome in less than 1 hour using ordinary low-cost server hardware. The overall concordance with the genotypes of two Illumina "Platinum" genomes is 99.96%, and the concordance with the genotypes of the Illumina HumanOmniExpress is 99.82%. Our method provides a k-mer database that can be used for the simultaneous genotyping of approximately 30 million single nucleotide variants (SNVs), including >23,000 SNVs from the Y chromosome. The source code of the FastGT software is available at GitHub (https://github.com/bioinfo-ut/GenomeTester4/). Next-generation sequencing (NGS) technologies are widely used for studying genome variation. Variants in the human genome are typically detected by mapping sequenced reads and then performing genotype calling [1] [2] [3] [4]. A standard pipeline requires 40-50 h to process a human genome with 30x coverage from raw sequence data to variant calls on a multi-thread server. Mapping and calling are state-of-the-art processes that require expert users familiar with numerous available software options. It is not surprising that different pipelines generate slightly different genotype calls [5] [6] [7] [8] [9]. Fortunately, inconsistent genotype calls are associated with certain genomic regions only [10] [11] [12], whereas genotyping in the remaining 80-90% of the genome is robust and reliable. The use of k-mers (substrings of length k) in genome analyses has increased because computers can handle large volumes of sequencing data more efficiently. For example, phylogenetic trees of all known bacteria can be easily built using k-mers from their genomic DNA [13] [14] [15]. Bacterial strains can be quickly identified from metagenomic data by searching for strain-specific k-mers [16] [17] [18]. K-mers have also been used to correct sequencing errors in raw reads [19] [20] [21] [22]. One recent publication has described an alignment-free SNV calling method that is based on counting the frequency of k-mers [23]. This method converts sequences from raw reads into a Burrows-Wheeler transform and then calls genotypes by counting, using a variable-length unique substring surrounding the variant. We developed a new method that offers the possibility of directly genotyping known variants from NGS data by counting unique k-mers. The method only uses reliable regions of the genome and is approximately 1-2 orders of magnitude faster than traditional mapping-based genotype detection. Thus, it is ideally suited for a fast, preliminary analysis of a subset of markers before the full-scale analysis is finished. The method is implemented in the C programming language and is available as the FastGT software package. FastGT is currently limited to the calling of previously known genomic variants because specific k-mers must be pre-selected for all known alleles. Therefore, it is not a substitute for traditional mapping and variant calling but a complementary method that facilitates certain aspects of NGS-based genome analyses. In fact, FastGT is comparable to a large digital microarray that uses NGS data as an input.
Our method is based on three original components: (1) the procedure for the selection of unique k-mers, (2) the customized data structure for storing and counting k-mers directly from a FASTQ file, and (3) a maximum likelihood method designed specifically for estimating genotypes from k-mer counts. Compilation of the database of unique k-mer pairs. The crucial component of FastGT is a pre-compiled flat-file database of genomic variants and corresponding k-mer pairs that overlap with each variant. Every bi-allelic single nucleotide variant (SNV) position in the genome is covered by k k-mer pairs, where pair is formed
FastGT REF is yet another k-mer-based method to genotype sequencing data: it strongly relies on a pre-compiled database of bi-allelic SNVs and corresponding k-mers, obtained by subjecting the k-mers that overlap known SNVs to several filtering steps.
3299721
FastGT: an alignment-free method for calling common SNVs directly from raw sequencing reads
{ "venue": "Scientific Reports", "journal": "Scientific Reports", "mag_field_of_study": [ "Biology", "Medicine" ] }
Abstract-In the cloud context, pricing and capacity planning are two important factors in the profit of infrastructure-as-a-service (IaaS) providers. This paper investigates the problem of joint pricing and capacity planning in the IaaS provider market with a set of software-as-a-service (SaaS) providers, where each SaaS provider leases virtual machines (VMs) from the IaaS providers to provide cloud-based application services to its end-users. We study two market models, one with a monopoly IaaS provider market, the other with a multiple-IaaS-provider market. For the monopoly IaaS provider market, we first study the SaaS providers' optimal decisions in terms of the amount of end-user requests to admit and the number of VMs to lease, given the resource price charged by the IaaS provider. Based on the best responses of the SaaS providers, we then derive the optimal solution to the problem of joint pricing and capacity planning to maximize the IaaS provider's profit. Next, for the market with multiple IaaS providers, we formulate the pricing and capacity planning competition among the IaaS providers as a three-stage Stackelberg game. We explore the existence and uniqueness of the Nash equilibrium, and derive the conditions under which there exists a unique Nash equilibrium. Finally, we develop an iterative algorithm to achieve the Nash equilibrium.
Tang and Chen REF proposed a Stackelberg game formulation for the joint pricing and capacity allocation problem in a scenario with multiple IaaS and SaaS providers.
16620900
Joint Pricing and Capacity Planning in the IaaS Cloud Market
{ "venue": "IEEE Transactions on Cloud Computing", "journal": "IEEE Transactions on Cloud Computing", "mag_field_of_study": [ "Computer Science" ] }
Abstract-In graph-based learning models, entities are often represented as vertices in an undirected graph with weighted edges describing the relationships between entities. In many real-world applications, however, entities are often associated with relations of different types and/or from different sources, which can be well captured by multiple undirected graphs over the same set of vertices. How to exploit such multiple sources of information to make better inferences on entities remains an interesting open problem. In this paper, we focus on the problem of clustering the vertices based on multiple graphs in both unsupervised and semi-supervised settings. As one of our contributions, we propose Linked Matrix Factorization (LMF) as a novel way of fusing information from multiple graph sources. In LMF, each graph is approximated by matrix factorization with a graph-specific factor and a factor common to all graphs, where the common factor provides features for all vertices. Experiments on SIAM journal data show that (1) we can improve the clustering accuracy through fusing multiple sources of information with several models, and (2) LMF yields superior or competitive results compared to other graph-based clustering methods.
For its part, REF solves a multiple-graph clustering problem where each graph is approximated by matrix factorization with a graph-specific factor and a factor common to all graphs.
1608993
Clustering with Multiple Graphs
{ "venue": "2009 Ninth IEEE International Conference on Data Mining", "journal": "2009 Ninth IEEE International Conference on Data Mining", "mag_field_of_study": [ "Computer Science" ] }
Learning transformation-invariant representations of visual data is an important problem in computer vision. Deep convolutional networks have demonstrated remarkable results for image and video classification tasks. However, they have achieved only limited success in the classification of images that undergo geometric transformations. In this work, we present a novel Transformation Invariant Graph-based Network (TIGraNet), which learns graph-based features that are inherently invariant to isometric transformations such as rotation and translation of input images. In particular, images are represented as signals on graphs, which makes it possible to replace classical convolution and pooling layers in deep networks with graph spectral convolution and dynamic graph pooling layers that together contribute to invariance to isometric transformations. Our experiments show high performance on rotated and translated images from the test set compared to classical architectures that are very sensitive to transformations in the data. The inherent invariance properties of our framework provide key advantages, such as increased resiliency to data variability and sustained performance with limited training sets.
REF learns graph-based features on images that are inherently invariant to isometric transformations.
906304
Graph-based Isometry Invariant Representation Learning
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science" ] }
Abstract-Existing path planning algorithms are capable of finding physically feasible, shortest, and energy-efficient paths for mobile robots navigating on uneven terrains. However, shortest paths on uneven terrains are often energy inefficient, while energy-optimal paths usually take a long time to traverse. Therefore, due to time and energy constraints imposed on mobile robots, these shortest and energy-optimal paths might not be applicable. We propose a multiobjective path planner that can find Pareto-optimal solutions in terms of path length and energy consumption. It is based on the NAMOA* search algorithm, utilizing a proposed monotone heuristic cost function. The simulation results show that the nondominated path options found by the proposed path planner can be very useful in many real-world applications.
The work in REF proposes a multi-objective path finder that can discover Pareto-optimal solutions concerning energy consumption and length of the path.
35989110
Multiobjective path planning on uneven terrains based on NAMOA*
{ "venue": "2016 IEEE International Symposium on Circuits and Systems (ISCAS)", "journal": "2016 IEEE International Symposium on Circuits and Systems (ISCAS)", "mag_field_of_study": [ "Computer Science" ] }
This paper presents an aided dead-reckoning navigation structure and signal processing algorithms for self-localization of an autonomous mobile device by fusing pedestrian dead reckoning and WiFi signal strength measurements. WiFi and inertial navigation systems (INS) are used for positioning and attitude determination in a wide range of applications. Over the last few years, a number of low-cost inertial sensors have become available. Although they exhibit large errors, WiFi measurements can be used to correct the drift that weakens navigation based on this technology. On the other hand, INS sensors can interact with the WiFi positioning system as they provide high-accuracy real-time navigation. A structure based on a Kalman filter and a particle filter is proposed. It fuses the heterogeneous information coming from these two independent technologies. Finally, the benefits of the proposed architecture are evaluated and compared with the pure WiFi and INS positioning systems.
Reference REF first proposes a particle filter that fuses inertial signals and WiFi positioning.
28050779
Advanced Integration of WiFi and Inertial Navigation Systems for Indoor Mobile Positioning
{ "venue": null, "journal": "EURASIP Journal on Advances in Signal Processing", "mag_field_of_study": [ "Computer Science" ] }
Spatiotemporal forecasting has various applications in the neuroscience, climate, and transportation domains. Traffic forecasting is one canonical example of such a learning task. The task is challenging due to (1) complex spatial dependency on road networks, (2) non-linear temporal dynamics with changing road conditions, and (3) the inherent difficulty of long-term forecasting. To address these challenges, we propose to model the traffic flow as a diffusion process on a directed graph and introduce the Diffusion Convolutional Recurrent Neural Network (DCRNN), a deep learning framework for traffic forecasting that incorporates both spatial and temporal dependency in the traffic flow. Specifically, DCRNN captures the spatial dependency using bidirectional random walks on the graph, and the temporal dependency using the encoder-decoder architecture with scheduled sampling. We evaluate the framework on two real-world large-scale road network traffic datasets and observe a consistent improvement of 12%-15% over state-of-the-art baselines. Most recently, deep learning models for traffic forecasting have been developed in Lv et al. (2015) and Yu et al. (2017b), but without considering the spatial structure. Wu & Tan (2016) and Ma et al. (2017) model the spatial correlation with Convolutional Neural Networks (CNN), but the spatial structure is in the Euclidean space (e.g., 2D images). Bruna et al. (2014) and Defferrard et al. (2016) studied graph convolution, but only for undirected graphs. In this work, we represent the pair-wise spatial correlations between traffic sensors using a directed graph whose nodes are sensors and edge weights denote proximity between the sensor pairs measured by the road network distance. We model the dynamics of the traffic flow as a diffusion process and propose the diffusion convolution operation to capture the spatial dependency. We further propose the Diffusion Convolutional Recurrent Neural Network (DCRNN) that integrates diffusion convolution, the sequence-to-sequence architecture, and the scheduled sampling technique. When evaluated on real-world traffic datasets, DCRNN consistently outperforms state-of-the-art traffic forecasting baselines by a large margin. In summary:
The matrix multiplication in GRU is replaced by the diffusion convolution operation on the traffic network at each time step REF .
3508727
Diffusion Convolutional Recurrent Neural Network: Data-Driven Traffic Forecasting
{ "venue": "ICLR", "journal": null, "mag_field_of_study": [ "Mathematics", "Computer Science" ] }
ABSTRACT Wireless sensor networks (WSNs) will be integrated into the future Internet as one of the components of the Internet of Things, and will become globally addressable by any entity connected to the Internet. Despite the great potential of this integration, it also brings new threats, such as the exposure of sensor nodes to attacks originating from the Internet. In this context, lightweight authentication and key agreement protocols must be in place to enable end-to-end secure communication. Recently, Amin et al. proposed a three-factor mutual authentication protocol for WSNs. However, we identified several flaws in their protocol. We found that their protocol suffers from a smart card loss attack, where the user identity and password can be guessed using offline brute force techniques. Moreover, the protocol suffers from a known session-specific temporary information attack, which leads to the disclosure of session keys in other sessions. Furthermore, the protocol is vulnerable to a tracking attack and fails to provide user untraceability. To address these deficiencies, we present a lightweight and secure user authentication protocol based on the Rabin cryptosystem, which has the characteristic of computational asymmetry. We conduct a formal verification of our proposed protocol using ProVerif in order to demonstrate that our scheme fulfills the required security properties. We also present a comprehensive heuristic security analysis to show that our protocol is secure against all the possible attacks and provides the desired security features. The results we obtained show that our new protocol is a secure and lightweight solution for authentication and key agreement for Internet-integrated WSNs. INDEX TERMS Authentication, biometrics, key management, privacy, Rabin cryptosystem, smart card, wireless sensor networks.
Jiang et al. REF proposed a Rabin cryptosystem-based authentication and key agreement protocol.
3343893
Lightweight Three-Factor Authentication and Key Agreement Protocol for Internet-Integrated Wireless Sensor Networks
{ "venue": "IEEE Access", "journal": "IEEE Access", "mag_field_of_study": [ "Computer Science" ] }
Abstract-Dramatic mobile data traffic growth has spurred a dense deployment of small cell base stations (SCBSs). Small cells enhance the spectrum efficiency and thus enlarge the capacity of mobile networks. Although SCBSs consume much less power than macro BSs (MBSs) do, the overall power consumption of a large number of SCBSs is phenomenal. As the energy harvesting technology advances, base stations (BSs) can be powered by green energy to alleviate the on-grid power consumption. For mobile networks with high BS density, traffic load balancing is critical in order to exploit the capacity of SCBSs. To fully utilize harvested energy, it is desirable to incorporate the green energy utilization as a performance metric in traffic load balancing strategies. In this paper, we have proposed a traffic load balancing framework that strikes a balance between network utilities, e.g., the average traffic delivery latency, and the green energy utilization. Various properties of the proposed framework have been derived. Leveraging the software-defined radio access network architecture, the proposed scheme is implemented as a virtually distributed algorithm, which significantly reduces the communication overheads between users and BSs. The simulation results show that the proposed traffic load balancing framework enables an adjustable trade-off between the on-grid power consumption and the average traffic delivery latency, and saves a considerable amount of on-grid power, e.g., 30%, at the cost of only a small increase, e.g., 8%, in the average traffic delivery latency. Index Terms-Green communications, HetNet, renewable energy, software-defined radio access networks, traffic load balancing.
From this perspective, in REF , Han and Ansari presented a virtually distributed algorithm named vGALA to reach a trade-off between network utilities and green-energy utilization in software-defined radio access networks powered by hybrid energy sources.
1065426
A traffic load balancing framework for software-defined radio access networks powered by hybrid energy sources
{ "venue": "TNET", "journal": null, "mag_field_of_study": [ "Computer Science" ] }