A solid design flow must capture designs at well-defined levels of abstraction and proceed toward an efficient implementation. The critical decisions involve the system's architecture, which will execute the computation and communication tasks associated with the design's overall specification. Understanding the application domain is essential to ensure efficient use of the design flow. Today, the design chain lacks adequate support. Most system-level designers use a collection of unlinked tools. The implementation then proceeds with informal techniques that involve numerous human-language interactions that create unnecessary and unwanted iterations among groups of designers in different companies or different divisions. These groups share little understanding of their respective knowledge domains. Developers thus cannot be sure that these tools, linked by manual or empirical translation of intermediate formats, will preserve the design's semantics. This uncertainty often results in errors that are difficult to identify and debug. The move toward programmable platforms shifts the design implementation task toward embedded software design. When embedded software reaches the complexity typical of today's designs, the risk that the software will not function correctly increases dramatically. This risk stems mainly from poor design methodologies and fragile software system architectures, the result of growing functionality over an existing implementation that may be quite old and undocumented. The Metropolis project seeks to develop a unified framework that can cope with these challenges. We designed Metropolis to provide an infrastructure based on a model with precise semantics that remain general enough to support existing computation models and accommodate new ones. This metamodel can support not only functionality capture and analysis, but also architecture description and the mapping of functionality to architectural elements.
Metropolis uses a logic language to capture nonfunctional and declarative constraints. Because the model has a precise semantics, it can support several synthesis and formal analysis tools in addition to simulation. The first design activity that Metropolis supports, communication of design intent and results, focuses on the interactions among people working at different abstraction levels and among people working concurrently at the same abstraction level. The metamodel includes constraints that represent in abstract form requirements not yet implemented or assumed to be satisfied by the rest of the system and its environment. Based on a metamodel with formal semantics that developers can use to capture designs, Metropolis provides an environment for complex electronic-system design that supports simulation, formal analysis, and synthesis.
This uncertainty can result in errors that are difficult to identify and debug REF .
206446849
Metropolis: an integrated electronic system design environment
{ "venue": "IEEE Computer", "journal": "IEEE Computer", "mag_field_of_study": [ "Computer Science" ] }
We propose a practical defect prediction approach for companies that do not track defect-related data. Specifically, we investigate the applicability of cross-company (CC) data for building localized defect predictors using static code features. Firstly, we analyze the conditions under which CC data can be used as is. These conditions turn out to be quite few. Then we apply principles of analogy-based learning (i.e. nearest neighbor (NN) filtering) to CC data, in order to fine-tune these models for localization. We compare the performance of these models with that of defect predictors learned from within-company (WC) data. As expected, we observe that defect predictors learned from WC data outperform the ones learned from CC data. However, our analyses also yield defect predictors learned from NN-filtered CC data, with performance close to, but still not better than, WC data. Therefore, we perform a final analysis for determining the minimum number of local defect reports in order to learn WC defect predictors. We demonstrate in this paper that the minimum number of data samples required to build effective defect predictors can be quite small and can be collected quickly within a few months. Hence, for companies with no local defect data, we recommend a two-phase approach that allows them to employ the defect prediction process instantaneously. In phase one, companies should use
Turhan et al. REF investigated the applicability of CC data for building localized defect predictors using 10 projects collected from two different organizations, NASA and SOFTLAB. They also proposed a nearest neighbor (NN) filter to select CC data.
10482384
On the relative value of cross-company and within-company data for defect prediction
{ "venue": "Empirical Software Engineering", "journal": "Empirical Software Engineering", "mag_field_of_study": [ "Computer Science" ] }
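The NN-filtering step summarized in the record above is simple enough to sketch. The following is an illustrative reconstruction, not the paper's actual tooling: the function name, the `k` parameter, and the toy feature values are all assumptions (Turhan et al. use k = 10 on static code features).

```python
import numpy as np

def nn_filter(cc_features, wc_features, k=2):
    """Illustrative NN filter: for each local (within-company) instance,
    keep the k nearest cross-company instances by Euclidean distance on
    the static code features; the defect predictor is then trained on
    the union of the selected CC rows."""
    selected = set()
    for x in wc_features:
        d = np.linalg.norm(cc_features - x, axis=1)  # distance to every CC row
        selected.update(np.argsort(d)[:k])           # indices of the k nearest
    return sorted(int(i) for i in selected)

# toy data: 5 CC instances and 2 WC instances of 3 static-code features each
cc = np.array([[0, 0, 0], [1, 1, 1], [9, 9, 9], [0.5, 0, 0], [8, 8, 8]], float)
wc = np.array([[0, 0, 0.1], [9, 9, 8.9]], float)
idx = nn_filter(cc, wc, k=2)  # CC rows that resemble the local data
```

In the paper's setting, a defect predictor learned from `cc[idx]` stands in for a predictor trained on scarce local data.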
Abstract. We introduce a generic algorithmic technique and apply it on decision and counting versions of graph coloring. Our approach is based on the following idea: either a graph has nice (from the algorithmic point of view) properties which allow a simple recursive procedure to find the solution fast, or the pathwidth of the graph is small, which in turn can be used to find the solution by dynamic programming. By making use of this technique we obtain the fastest known exact algorithms, running in time O(1.7272^n) for deciding if a graph is 4-colorable, and in time O(1.6262^n) and O(1.9464^n) for counting the number of k-colorings for k = 3 and 4, respectively.
Fomin et al. showed an algorithm for the graph 4-coloring problem with running time O(1.7272^n) by using path decomposition REF .
14896965
Improved Exact Algorithms for Counting 3- and 4-Colorings
{ "venue": "COCOON", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Various wireless networks have made ambient radio frequency signals ubiquitous around the world. Wireless information and power transfer (WIPT) enables devices to recycle energy from these ambient radio frequency signals and process information simultaneously. In this paper, we develop a WIPT protocol in two-way amplify-and-forward relaying channels, where two sources exchange information via an energy harvesting relay node. The relay node collects energy from the received signal and uses it as the transmission power to forward the received signal. We analytically derive the exact expressions of the outage probability, the ergodic capacity and the finite-SNR diversity-multiplexing trade-off (DMT). Furthermore, tight closed-form upper and lower bounds of the outage probability and the ergodic capacity are then developed. Moreover, the impact of the power splitting ratio is also evaluated and analyzed. Finally, we show that compared to the non-cooperative relaying scheme, the proposed protocol is a green solution that offers a higher transmission rate and more reliable communication without consuming additional resources.
As in the OWR protocol, the outage probability and ergodic capacity are analyzed for the AF-TWR protocol employing the PS-SWIPT REF .
3812937
Wireless information and power transfer in two-way amplify-and-forward relaying channels
{ "venue": "2014 IEEE Global Conference on Signal and Information Processing (GlobalSIP)", "journal": "2014 IEEE Global Conference on Signal and Information Processing (GlobalSIP)", "mag_field_of_study": [ "Computer Science", "Mathematics" ] }
Recently, very deep convolutional neural networks (CNNs) have shown outstanding performance in object recognition and have also been the first choice for dense classification problems such as semantic segmentation. However, repeated subsampling operations like pooling or convolution striding in deep CNNs lead to a significant decrease in the initial image resolution. Here, we present RefineNet, a generic multi-path refinement network that explicitly exploits all the information available along the down-sampling process to enable high-resolution prediction using long-range residual connections. In this way, the deeper layers that capture high-level semantic features can be directly refined using fine-grained features from earlier convolutions. The individual components of RefineNet employ residual connections following the identity mapping mindset, which allows for effective end-to-end training. Further, we introduce chained residual pooling, which captures rich background context in an efficient manner. We carry out comprehensive experiments and set new state-of-the-art results on seven public datasets. In particular, we achieve an intersection-over-union score of 83.4 on the challenging PASCAL VOC 2012 dataset, which is the best reported result to date.
In REF , the multi-path refinement network is developed to extract all the information available along the down-sampling process to enable high-resolution prediction using long-range residual connections.
5696978
RefineNet: Multi-path Refinement Networks for High-Resolution Semantic Segmentation
{ "venue": "2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)", "journal": "2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)", "mag_field_of_study": [ "Computer Science" ] }
Abstract-Cellular operators are continuously densifying their networks to cope with the ever-increasing capacity demand. Furthermore, an extreme densification phase for cellular networks is foreseen to fulfill the ambitious fifth generation (5G) performance requirements. Network densification improves spectrum utilization and network capacity by shrinking base stations' (BSs) footprints and reusing the same spectrum more frequently over the spatial domain. However, network densification also increases the handover (HO) rate, which may diminish the capacity gains for mobile users due to HO delays. In highly dense 5G cellular networks, HO delays may neutralize or even negate the gains offered by network densification. In this paper, we present an analytical paradigm, based on stochastic geometry, to quantify the effect of HO delay on the average user rate in cellular networks. To this end, we propose a flexible handover scheme to reduce HO delay in case of highly dense cellular networks. This scheme allows skipping the HO procedure with some BSs along users' trajectories. The performance evaluation and testing of this scheme for only single HO skipping shows considerable gains in many practical scenarios.
Arshad et al. REF studied the handover problem in 5G networks based on stochastic geometry theory.
13152631
Handover management in dense cellular networks: A stochastic geometry approach
{ "venue": "2016 IEEE International Conference on Communications (ICC)", "journal": "2016 IEEE International Conference on Communications (ICC)", "mag_field_of_study": [ "Computer Science" ] }
Bayesian models offer great flexibility for clustering applications: Bayesian nonparametrics can be used for modeling infinite mixtures, and hierarchical Bayesian models can be utilized for sharing clusters across multiple data sets. For the most part, such flexibility is lacking in classical clustering methods such as k-means. In this paper, we revisit the k-means clustering algorithm from a Bayesian nonparametric viewpoint. Inspired by the asymptotic connection between k-means and mixtures of Gaussians, we show that a Gibbs sampling algorithm for the Dirichlet process mixture approaches a hard clustering algorithm in the limit, and further that the resulting algorithm monotonically minimizes an elegant underlying k-means-like clustering objective that includes a penalty for the number of clusters. We generalize this analysis to the case of clustering multiple data sets through a similar asymptotic argument with the hierarchical Dirichlet process. We also discuss further extensions that highlight the benefits of our analysis: i) a spectral relaxation involving thresholded eigenvectors, and ii) a normalized cut graph clustering algorithm that does not fix the number of clusters in the graph.
In contrast, D-Means and SD-Means derive this capability from a Bayesian nonparametric model, similarly to the DP-Means algorithm REF .
12274243
Revisiting k-means: New Algorithms via Bayesian Nonparametrics
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Mathematics", "Computer Science" ] }
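The hard-clustering limit described in the record above (DP-means) admits a compact sketch. This is an illustrative reconstruction under stated assumptions, not the authors' code: the penalty `lam`, the fixed iteration count, and the toy data are hypothetical choices.

```python
import numpy as np

def dp_means(X, lam, n_iter=10):
    """DP-means sketch: the small-variance limit of Gibbs sampling for a
    Dirichlet process mixture. Each point joins its nearest center unless
    the squared distance exceeds the penalty lam, in which case the point
    spawns a new cluster; centers are then updated as in k-means."""
    centers = [X[0].copy()]
    for _ in range(n_iter):
        labels = []
        for x in X:
            d2 = [np.sum((x - c) ** 2) for c in centers]
            j = int(np.argmin(d2))
            if d2[j] > lam:                # too far from every center:
                centers.append(x.copy())   # open a new cluster
                j = len(centers) - 1
            labels.append(j)
        for j in range(len(centers)):      # standard k-means center update
            pts = X[[i for i, l in enumerate(labels) if l == j]]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    return np.array(labels), np.array(centers)

# two well-separated blobs: the penalty makes the number of clusters emerge
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
labels, centers = dp_means(X, lam=1.0)
```

The penalty term plays the role the abstract describes: the minimized objective is the k-means cost plus lam times the number of clusters, so no cluster count is fixed in advance.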
Recently, learning-based hashing techniques have attracted broad research interest because they can support efficient storage and retrieval for high-dimensional data such as images, videos, and documents. However, a major difficulty of learning to hash lies in handling the discrete constraints imposed on the pursued hash codes, which typically makes hash optimization very challenging (NP-hard in general). In this work, we propose a new supervised hashing framework, where the learning objective is to generate the optimal binary hash codes for linear classification. By introducing an auxiliary variable, we reformulate the objective such that it can be solved substantially more efficiently by employing a regularization algorithm. One of the key steps in this algorithm is to solve a regularization sub-problem associated with the NP-hard binary optimization. We show that the sub-problem admits an analytical solution via cyclic coordinate descent. As such, a high-quality discrete solution can eventually be obtained in an efficient manner, enabling the method to tackle massive datasets. We evaluate the proposed approach, dubbed Supervised Discrete Hashing (SDH), on four large image datasets and demonstrate its superiority to the state-of-the-art hashing methods in large-scale image retrieval.
Supervised discrete hashing (SDH) REF leverages one linear regression model to generate optimal binary codes.
11307479
Supervised Discrete Hashing
{ "venue": "2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)", "journal": "2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)", "mag_field_of_study": [ "Computer Science" ] }
Assortment optimization is an important problem that arises in many practical applications such as retailing and online advertising. In an assortment optimization problem, the goal is to select a subset of items that maximizes the expected revenue in the presence of the substitution behavior of consumers specified by a choice model. In this paper, we consider the capacity constrained version of the assortment optimization problem under several choice models including the multinomial logit (MNL), nested logit (NL) and mixture of multinomial logit (MMNL) models. The goal is to select a revenue maximizing subset of items with total weight or capacity at most a given bound. We present a fully polynomial time approximation scheme (FPTAS) for these models when the number of mixtures or nests is constant. Our FPTAS uses ideas similar to the FPTAS for the knapsack problem. The running time of our algorithm depends exponentially on the number of mixtures in the MMNL model. We show that, surprisingly, the exponential dependence on the number of mixtures is necessary for any near-optimal algorithm for the MMNL choice model. In particular, we show that there is no algorithm with running time polynomial in the number of items n and mixtures K that obtains an approximation better than O(1/K^{1−δ}) for any δ > 0, even for unconstrained assortment optimization over a general MMNL model. Our reduction provides a procedure to construct a natural family of hard benchmark instances for the assortment optimization problem over MMNL that may be of independent interest. These instances are quite analogous to the consideration set based models (Jagabathula and Rusmevichientong, 2014) where the consideration set arises from a graphical model. We also present some special cases of the MMNL and NL models where we can obtain an FPTAS with a polynomial dependence on the number of mixtures.
When the choice function is described by a mixture of multinomial logit models, the assortment problem is NP-hard, but various integer programming methods and approximation algorithms are known [3, 15, REF ].
325623
Near-Optimal Algorithms for Capacity Constrained Assortment Optimization
{ "venue": "SSRN Electronic Journal", "journal": "SSRN Electronic Journal", "mag_field_of_study": [ "Mathematics" ] }
We address the problem of shoulder-surfing attacks on authentication schemes by proposing IllusionPIN (IPIN), a PIN-based authentication method that operates on touchscreen devices. IPIN uses the technique of hybrid images to blend two keypads with different digit orderings in such a way that the user who is close to the device sees one keypad to enter her PIN, while the attacker who is looking at the device from a greater distance sees only the other keypad. The user's keypad is shuffled in every authentication attempt, since the attacker may memorize the spatial arrangement of the pressed digits. To reason about the security of IllusionPIN, we developed an algorithm, based on human visual perception, that estimates the minimum distance from which an observer is unable to interpret the keypad of the user. We tested our estimations with 84 simulated shoulder-surfing attacks from 21 different people. None of the attacks was successful against our estimations. In addition, we estimated the minimum distance from which a camera is unable to capture the visual information from the keypad of the user. Based on our analysis, it seems practically impossible for a surveillance camera to capture the PIN of a smartphone user while IPIN is in use.
Papadopoulos et al. REF state that IPIN uses the technique of hybrid images to blend two keypads with different digit orderings.
206711862
IllusionPIN: Shoulder-Surfing Resistant Authentication Using Hybrid Images
{ "venue": "IEEE Transactions on Information Forensics and Security", "journal": "IEEE Transactions on Information Forensics and Security", "mag_field_of_study": [ "Computer Science" ] }
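The hybrid-image blend the record above relies on combines the low spatial frequencies of one image with the high frequencies of another. The sketch below is a minimal illustration, not IllusionPIN's implementation: a naive box blur stands in for the Gaussian low-pass filter used in practice, and all function names are hypothetical.

```python
import numpy as np

def box_blur(img, k):
    """Naive box blur with edge padding (a stand-in for a Gaussian
    low-pass filter)."""
    pad = k // 2
    p = np.pad(img.astype(float), pad, mode='edge')
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = p[i:i + k, j:j + k].mean()
    return out

def hybrid_image(near, far, kernel_size=5):
    """Blend two images so the low frequencies of `far` dominate at a
    distance while the high frequencies of `near` dominate up close."""
    low = box_blur(far, kernel_size)                         # coarse structure of `far`
    high = near.astype(float) - box_blur(near, kernel_size)  # fine detail of `near`
    return low + high

# A flat `near` image contributes no high frequencies, so the blend
# reduces to the blurred `far` image (all zeros here).
h = hybrid_image(np.ones((8, 8)), np.zeros((8, 8)))
```

In IPIN's setting, `near` would be the keypad the legitimate user should see and `far` the decoy keypad visible to a distant observer.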
Abstract-Low-power modes in modern microprocessors rely on low frequencies and low voltages to reduce the energy budget. Nevertheless, manufacturing-induced parameter variations can make SRAM cells unreliable, producing hard errors at supply voltages below Vccmin. Recent proposals provide rather low fault coverage due to the fault coverage/overhead trade-off. We propose a new fault-tolerant L1 cache, which combines SRAM and eDRAM cells in L1 data caches to provide 100% SRAM hard-error fault coverage. Results show that, compared to a conventional cache and assuming 50% failure probability at low-power mode, leakage and dynamic energy savings are 85% and 62%, respectively, with a minimal impact on performance.
In REF , authors propose a hybrid L1 data cache built with SRAM and eDRAM banks.
6244032
Combining RAM technologies for hard-error recovery in L1 data caches working at very-low power modes
{ "venue": "DATE '13", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
In the near future, upcoming communications and storage networks are expected to face major difficulties produced by the huge amounts of data being generated by the Internet of Things (IoT). For these types of networks, strategies and mechanisms based on network coding have appeared as an alternative to overcome these difficulties in a holistic manner, e.g., without sacrificing the benefit of a given network metric when improving another. There have been recurrent issues with: (i) making large-scale deployments akin to the Internet of Things; (ii) assessing and (iii) replicating the results obtained in preliminary studies. Therefore, testbeds that can deal with large-scale deployments and not lose historic data when evaluating these mechanisms are greatly needed and desirable from a research perspective. However, this can be hard to manage, not only due to the inherent costs of the hardware, but also due to maintenance challenges. In this paper, we present the required key steps to design, set up and maintain an inexpensive testbed using Raspberry Pi devices for communications and storage networks with network coding capabilities. This testbed can be utilized for any application requiring results replicability.
In the paper REF , the authors present the required steps to implement and configure an inexpensive testbed.
8440539
Easy as Pi: A Network Coding Raspberry Pi Testbed
{ "venue": null, "journal": "Electronics", "mag_field_of_study": [ "Engineering" ] }
Abstract-In this paper, for the sake of better global coverage, we introduce a novel triple-layered satellite network architecture including the Geostationary Earth Orbit (GEO), the Highly Elliptical Orbit (HEO), and the Low Earth Orbit (LEO) satellite layers, which provides near-global, 24-hour uninterrupted coverage over areas ranging from 75° S to 90° N. On the basis of this satellite network architecture, we propose an on-demand QoS multicast routing protocol (ODQMRP) for satellite IP networks using the concept of logical locations to isolate the mobility of LEO and HEO satellites. In ODQMRP, we present two strategies, i.e., the parallel shortest path tree (PSPT) strategy and the least cost tree (LCT) strategy, to create the multicast trees under the condition that the QoS requirements, containing the delay constraint and the available bandwidth constraint, are guaranteed. The PSPT and LCT strategies minimize the path delay and the path cost of the multicast trees, respectively. Simulation results demonstrate the performance benefits of the proposed ODQMRP in terms of the end-to-end tree delay, the tree cost, and the failure ratio of multicasting connections in comparison with the conventional non-QoS shortest path tree (SPT) strategy. Index Terms-Satellite networks, multicast routing, quality of service, low earth orbit (LEO), highly elliptical orbit (HEO), geostationary earth orbit (GEO).
An on-demand QoS multicast routing protocol (ODQMRP) for a triple-layered LEO/HEO/GEO satellite network architecture is presented in REF .
1618247
On-Demand QoS Multicast Routing for Triple- Layered LEO/HEO/GEO Satellite IP Networks
{ "venue": "JCM", "journal": "JCM", "mag_field_of_study": [ "Computer Science" ] }
This paper presents a novel approach for automatically generating image descriptions: visual detectors, language models, and multimodal similarity models learnt directly from a dataset of image captions. We use multiple instance learning to train visual detectors for words that commonly occur in captions, including many different parts of speech such as nouns, verbs, and adjectives. The word detector outputs serve as conditional inputs to a maximum-entropy language model. The language model learns from a set of over 400,000 image descriptions to capture the statistics of word usage. We capture global semantics by re-ranking caption candidates using sentence-level features and a deep multimodal similarity model. Our system is state-of-the-art on the official Microsoft COCO benchmark, producing a BLEU-4 score of 29.1%. When human judges compare the system captions to ones written by other people on our heldout test set, the system captions have equal or better quality 34% of the time.
In image captioning research, Fang et al. REF exploit a multiple instance learning (MIL) approach to train visual detectors that identify a set of words and associate them with bounding-box regions of the image.
9254582
From captions to visual concepts and back
{ "venue": "2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)", "journal": "2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)", "mag_field_of_study": [ "Computer Science" ] }
We propose a statistical measure for the degree of acceptability of light verb constructions, such as take a walk, based on their linguistic properties. Our measure shows good correlations with human ratings on unseen test data. Moreover, we find that our measure correlates more strongly when the potential complements of the construction (such as walk, stroll, or run) are separated into semantically similar classes. Our analysis demonstrates the systematic nature of the semi-productivity of these constructions.
A statistical method is applied to measure the acceptability of possible light verb constructions in REF , which correlates reasonably well with human judgments.
6344468
Statistical Measures Of The Semi-Productivity Of Light Verb Constructions
{ "venue": "Workshop On Multiword Expressions: Integrating Processing", "journal": null, "mag_field_of_study": [ "Mathematics" ] }
We introduce an extremely computation-efficient CNN architecture named ShuffleNet, which is designed specially for mobile devices with very limited computing power. The new architecture utilizes two new operations, pointwise group convolution and channel shuffle, to greatly reduce computation cost while maintaining accuracy. Experiments on ImageNet classification and MS COCO object detection demonstrate the superior performance of ShuffleNet over other structures, e.g., lower top-1 error (absolute 7.8%) than the recent MobileNet [12] on the ImageNet classification task, under the computation budget of 40 MFLOPs. On an ARM-based mobile device, ShuffleNet achieves a ∼13× actual speedup over AlexNet while maintaining comparable accuracy.
ShuffleNet REF uses pointwise group convolution and channel shuffle operation to reduce FLOPs while maintaining accuracy.
24982157
ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices
{ "venue": "2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition", "journal": "2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition", "mag_field_of_study": [ "Computer Science" ] }
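The channel shuffle operation named in the record above is just a reshape-transpose-reshape over the channel axis. This NumPy sketch is illustrative (frameworks apply the same permutation to their own tensor types, e.g. inside a ShuffleNet unit):

```python
import numpy as np

def channel_shuffle(x, groups):
    """ShuffleNet-style channel shuffle on an (N, C, H, W) array: after a
    pointwise group convolution, interleave channels across groups so
    information can flow between groups in the next group convolution."""
    n, c, h, w = x.shape
    assert c % groups == 0, "channel count must be divisible by groups"
    return (x.reshape(n, groups, c // groups, h, w)  # split channels into groups
             .transpose(0, 2, 1, 3, 4)               # swap group and sub-channel axes
             .reshape(n, c, h, w))                    # flatten back to C channels

# 6 channels in 2 groups [0 1 2 | 3 4 5] become interleaved [0 3 1 4 2 5]
x = np.arange(6, dtype=float).reshape(1, 6, 1, 1)
y = channel_shuffle(x, groups=2)
```

Without this permutation, stacked pointwise group convolutions would keep each group's outputs isolated from the others, which is the accuracy problem the operation addresses.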
Deep learning achieves state-of-the-art results in many tasks in computer vision and natural language processing. However, recent works have shown that deep networks can be vulnerable to adversarial perturbations, which raises a serious robustness issue for deep networks. Adversarial training, typically formulated as a robust optimization problem, is an effective way of improving the robustness of deep networks. A major drawback of existing adversarial training algorithms is the computational overhead of the generation of adversarial examples, typically far greater than that of the network training. This leads to an unbearable overall computational cost of adversarial training. In this paper, we show that adversarial training can be cast as a discrete time differential game. Through analyzing Pontryagin's Maximum Principle (PMP) of the problem, we observe that the adversary update is only coupled with the parameters of the first layer of the network. This inspires us to restrict most of the forward and back propagation within the first layer of the network during adversary updates. This effectively reduces the total number of full forward and backward propagations to only one for each group of adversary updates. Therefore, we refer to this algorithm as YOPO (You Only Propagate Once). Numerical experiments demonstrate that YOPO can achieve comparable defense accuracy with approximately 1/5 of the GPU time of the projected gradient descent (PGD) algorithm [16].
YOPO REF observes that the adversary update is mainly coupled with the parameters of the first layer of the network.
146120969
You Only Propagate Once: Accelerating Adversarial Training via Maximal Principle
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science", "Mathematics" ] }
Abstract-Kernel-based mean shift (MS) trackers have proven to be a promising alternative to stochastic particle filtering trackers. Despite its popularity, MS trackers have two fundamental drawbacks: (1) The template model can only be built from a single image; (2) It is difficult to adaptively update the template model. In this work we generalize the plain MS trackers and attempt to overcome these two limitations. It is well known that modeling and maintaining a representation of a target object is an important component of a successful visual tracker. However, little work has been done on building a robust template model for kernel-based MS tracking. In contrast to building a template from a single frame, we train a robust object representation model from a large amount of data. Tracking is viewed as a binary classification problem, and a discriminative classification rule is learned to distinguish between the object and background. We adopt a support vector machine (SVM) for training. The tracker is then implemented by maximizing the classification score. An iterative optimization scheme very similar to MS is derived for this purpose. Compared with the plain MS tracker, it is now much easier to incorporate on-line template adaptation to cope with inherent changes during the course of tracking. To this end, a sophisticated on-line support vector machine is used. We demonstrate successful localization and tracking on various data sets.
Shen et al. REF generalize the kernel-based mean shift tracker so that the template model is trained from a large amount of data rather than built from a single image, and can be adaptively updated during tracking.
1223938
Generalized Kernel-based Visual Tracking
{ "venue": "IEEE Transactions on Circuits and Systems for Video Technology, 2010", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Abstract-In this paper we consider a team of mobile nodes that are in charge of cooperative target tracking. We propose communication-aware navigation functions that allow the nodes to perform their task while maintaining their connectivity to a fixed base station and avoiding obstacles. More specifically, we show how to incorporate measures of link qualities in the navigation functions. We consider both centralized and decentralized scenarios. We furthermore explore the impact of stochastic channels and channel estimation error on the overall performance.
Communication-aware navigation functions are introduced in REF .
1403855
Communication-aware navigation functions for cooperative target tracking
{ "venue": "2009 American Control Conference", "journal": "2009 American Control Conference", "mag_field_of_study": [ "Computer Science" ] }
Hales, T.C., The sphere packing problem, Journal of Computational and Applied Mathematics 44 (1992) 41-76. The sphere packing problem asks whether any packing of spheres of equal radius in three dimensions has density exceeding that of the face-centered-cubic lattice packing (of density π/√18). This paper sketches a solution to this problem. Keywords: Sphere packing; Delaunay triangulation; packing and covering; spherical geometry; Hilbert's problems; Voronoi cells. We begin with a general discussion of the strategy of the proof that no packing of equal spheres in three dimensions has density exceeding π/√18. The density of any packing may be improved by adding spheres as long as there is sufficient room to do so. When there is no longer room to add additional spheres, we say that the packing is saturated. We assume that our packings are saturated. We take our spheres to be of radius 1. Thus in a saturated packing no point of space lies more than distance 2 from a sphere center. The sphere centers are called the packing points. Every saturated packing gives rise to a decomposition of space into simplices called the Delaunay decomposition. The simplices are called Delaunay simplices. Each vertex of a simplex lies at a packing point. Every sphere (abstract sphere, not packing sphere) circumscribing a simplex has the property that none of the packing points lie in the interior of the sphere. In fact, this property is enough to completely determine the Delaunay decomposition except for certain degeneracies. When all of the simplices sharing a common vertex are grouped together, the resulting polytope is called a Delaunay star. Thus each Delaunay star is the union of Delaunay simplices.
In three dimensions, the question of optimal node positioning corresponds to the sphere packing problem REF.
17459181
The sphere packing problem
{ "venue": null, "journal": "Journal of Computational and Applied Mathematics", "mag_field_of_study": [ "Mathematics" ] }
Abstract-Delay-tolerant networks (DTNs) provide a promising solution to support wide-ranging applications in the regions where end-to-end network connectivity is not available. In DTNs, the intermediate nodes on a communication path are expected to store, carry, and forward the in-transit messages (or bundles) in an opportunistic way, which is called opportunistic data forwarding. Such a forwarding method depends on the hypothesis that each individual node is ready to forward packets for others. This assumption, however, might easily be violated due to the existence of selfish or even malicious nodes, which may be unwilling to waste their precious wireless resources to serve as bundle relays. To address this problem, we propose a secure multilayer credit-based incentive scheme to stimulate bundle forwarding cooperation among DTN nodes. The proposed scheme can be implemented in a fully distributed manner to thwart various attacks without relying on any tamperproof hardware. In addition, we introduce several efficiency optimization techniques to improve the overall efficiency by exploiting the unique characteristics of DTNs. Extensive simulations demonstrate the efficacy and efficiency of the proposed scheme.
To incentivize nodes for DTNs, Zhu et al. proposed a secure multilayer credit-based incentive scheme, named SMART REF , which provides nodes with virtual coins to charge for and reward the provision of data forwarding.
555139
SMART: A Secure Multilayer Credit-Based Incentive Scheme for Delay-Tolerant Networks
{ "venue": "IEEE Transactions on Vehicular Technology", "journal": "IEEE Transactions on Vehicular Technology", "mag_field_of_study": [ "Computer Science" ] }
While residential broadband Internet access is popular in many parts of the world, only a few studies have examined the characteristics of such traffic. In this paper we describe observations from monitoring the network activity for more than 20,000 residential DSL customers in an urban area. To ensure privacy, all data is immediately anonymized. We augment the anonymized packet traces with information about DSL-level sessions, IP (re-)assignments, and DSL link bandwidth. Our analysis reveals a number of surprises in terms of the mental models we developed from the measurement literature. For example, we find that HTTP-not peer-to-peer-traffic dominates by a significant margin; that more often than not the home user's immediate ISP connectivity contributes more to the round-trip times the user experiences than the WAN portion of the path; and that the DSL lines are frequently not the bottleneck in bulk-transfer performance.
The authors in REF characterize the traffic generated by more than 20,000 residential DSL lines according to the download access bandwidth.
10935192
On dominant characteristics of residential broadband internet traffic
{ "venue": "IMC '09", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
The MLS surface [Levin 2003 ], used for modeling and rendering with point clouds, was originally defined algorithmically as the output of a particular meshless construction. We give a new explicit definition in terms of the critical points of an energy function on lines determined by a vector field. This definition reveals connections to research in computer vision and computational topology. Variants of the MLS surface can be created by varying the vector field and the energy function. As an example, we define a similar surface determined by a cloud of surfels (points equipped with normals), rather than points. We also observe that some procedures described in the literature to take points in space onto the MLS surface fail to do so, and we describe a simple iterative procedure which does.
REF gives a different point-set surface definition that utilizes the critical points of an energy function on lines determined by a vector field.
13939126
Defining point-set surfaces
{ "venue": "SIGGRAPH '04", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Abstract-Hybrid automatic-repeat-request (ARQ) is a flexible and efficient technique for data transmissions. In hybrid ARQ, subpacket schemes are more attractive for systems with burst errors than complete packet schemes. Although subpacket schemes were proposed in ARQ systems, optimum subpacket transmission is more effective to maximize throughput in a dynamic channel. Since convolutional codes have properties of burst errors in decoding, the optimum subpacket can be applied to convolutional codes. This paper investigates the performance of subpacket transmission for convolutionally coded systems. An efficient method is proposed to estimate the optimum number of subpackets, and adaptive subpacket schemes, i.e., schemes that enable a system to employ different optimum numbers of subpackets under various conditions, are suggested to achieve the maximum throughput of the system. Numerical and simulation results show that the adaptive subpacket scheme is very effective for the convolutionally coded hybrid ARQ system, and it can provide higher throughput, smaller delay, and lower dropping rate than complete packet schemes. Moreover, the adaptive subpacket scheme can be flexibly used with packet combining techniques to further improve the system throughput.
The authors of REF propose an adaptive subpacket scheme that maximizes throughput by optimizing the number of subpackets.
25519368
Optimum subpacket transmission for hybrid ARQ systems
{ "venue": "IEEE Transactions on Communications", "journal": "IEEE Transactions on Communications", "mag_field_of_study": [ "Computer Science" ] }
Mobile Ad Hoc Networks are vulnerable to a variety of network layer attacks such as black hole, grey hole, sleep deprivation & rushing attacks. In this paper we present an intrusion detection & adaptive response mechanism for MANETs that detects a range of attacks and provides an effective response with low network degradation. We consider the deficiencies of a fixed response to an intrusion; and we overcome these deficiencies with a flexible response scheme that depends on the measured confidence in the attack, the severity of attack and the degradation in network performance. We present results from an implementation of the response scheme that has three intrusion response actions. Simulation results show the effectiveness of the proposed detection and adaptive response mechanisms in various attack scenarios. An analysis of the impact of our proposed scheme shows that it allows a flexible approach to management of threats and demonstrates improved network performance with a low network overhead.
An intrusion detection and adaptive response mechanism (IDAR) was proposed by Nadeem and Howarth REF .
16919868
An intrusion detection & adaptive response mechanism for MANETs
{ "venue": "Ad Hoc Networks", "journal": "Ad Hoc Networks", "mag_field_of_study": [ "Computer Science" ] }
Modern laser range and optical scanners need rendering techniques that can handle millions of points with high resolution textures. This paper describes a point rendering and texture filtering technique called surface splatting which directly renders opaque and transparent surfaces from point clouds without connectivity. It is based on a novel screen space formulation of the Elliptical Weighted Average (EWA) filter. Our rigorous mathematical analysis extends the texture resampling framework of Heckbert to irregularly spaced point samples. To render the points, we develop a surface splat primitive that implements the screen space EWA filter. Moreover, we show how to optimally sample image and procedural textures to irregular point data during pre-processing. We also compare the optimal algorithm with a more efficient view-independent EWA pre-filter. Surface splatting makes the benefits of EWA texture filtering available to point-based rendering. It provides high quality anisotropic texture filtering, hidden surface removal, edge anti-aliasing, and order-independent transparency.
Point-based rendering algorithms typically use reconstruction filters that hide the discrete nature of the point representation REF.
3206922
Surface splatting
{ "venue": "SIGGRAPH '01", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Abstract-Numerous methods that automatically identify subjects depicted in sketches as described by eyewitnesses have been implemented, but their performance often degrades when using real-world forensic sketches and extended galleries that mimic law enforcement mug-shot galleries. Moreover, little work has been done to apply deep learning for face photo-sketch recognition despite its success in numerous application domains including traditional face recognition. This is primarily due to the limited number of sketch images available, which are insufficient to robustly train large networks. This letter aims to tackle these issues with the following contributions: 1) a state-of-the-art model pre-trained for face photo recognition is tuned for face photo-sketch recognition by applying transfer learning, 2) a three-dimensional morphable model is used to synthesise new images and artificially expand the training data, allowing the network to prevent over-fitting and learn better features, 3) multiple synthetic sketches are also used in the testing stage to improve performance, and 4) fusion of the proposed method with a state-of-the-art algorithm is shown to further boost performance. An extensive evaluation of several popular and state-of-the-art algorithms is also performed using publicly available datasets, thereby serving as a benchmark for future algorithms. Compared to a leading method, the proposed framework is shown to reduce the error rate by 80.7% for viewed sketches and lowers the mean retrieval rank by 32.5% for real-world forensic sketches.
Galea and Farrugia REF tackled forensic sketch recognition by applying transfer learning to a state-of-the-art model pre-trained for face photo recognition.
23362178
Forensic Face Photo-Sketch Recognition Using a Deep Learning-Based Architecture
{ "venue": "IEEE Signal Processing Letters", "journal": "IEEE Signal Processing Letters", "mag_field_of_study": [ "Computer Science" ] }
We present NewsQA, a challenging machine comprehension dataset of over 100,000 human-generated question-answer pairs. Crowdworkers supply questions and answers based on a set of over 10,000 news articles from CNN, with answers consisting of spans of text from the corresponding articles. We collect this dataset through a four-stage process designed to solicit exploratory questions that require reasoning. A thorough analysis confirms that NewsQA demands abilities beyond simple word matching and recognizing textual entailment. We measure human performance on the dataset and compare it to several strong neural models. The performance gap between humans and machines (0.198 in F1) indicates that significant progress can be made on NewsQA through future research. The dataset is freely available at https://datasets.maluuba.com/NewsQA.
NewsQA REF is a dataset of news articles from CNN with questions and answers written by crowdworkers.
1167588
NewsQA: A Machine Comprehension Dataset
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science" ] }
Background: Although memory impairment is the main symptom of Alzheimer's disease (AD), language impairment can be an important marker. Relatively few studies of language in AD quantify the impairments in connected speech using computational techniques. Objective: We aim to demonstrate state-of-the-art accuracy in automatically identifying Alzheimer's disease from short narrative samples elicited with a picture description task, and to uncover the salient linguistic factors with a statistical factor analysis. Methods: Data are derived from the DementiaBank corpus, from which 167 patients diagnosed with "possible" or "probable" AD provide 240 narrative samples, and 97 controls provide an additional 233. We compute a number of linguistic variables from the transcripts, and acoustic variables from the associated audio files, and use these variables to train a machine learning classifier to distinguish between participants with AD and healthy controls. To examine the degree of heterogeneity of linguistic impairments in AD, we follow an exploratory factor analysis on these measures of speech and language with an oblique promax rotation, and provide interpretation for the resulting factors. Results: We obtain state-of-the-art classification accuracies of over 81% in distinguishing individuals with AD from those without based on short samples of their language on a picture description task. Four clear factors emerge: semantic impairment, acoustic abnormality, syntactic impairment, and information impairment. Conclusion: Modern machine learning and linguistic analysis will be increasingly useful in assessment and clustering of suspected AD.
A first analysis REF , based on a monologue corpus (DementiaBank), identified four different linguistic factors as main descriptors: syntactic, semantic, and information impairments, and acoustic abnormality.
7357141
Linguistic Features Identify Alzheimer’s Disease in Narrative Speech
{ "venue": "Journal of Alzheimer's disease : JAD", "journal": "Journal of Alzheimer's disease : JAD", "mag_field_of_study": [ "Psychology", "Medicine" ] }
Citation details: Wei,Q., Chen,T., Xu,R. et al. The recognition of disease and chemical named entities in scientific articles is a very important subtask in information extraction in the biomedical domain. Due to the diversity and complexity of disease names, the recognition of named entities of diseases is rather tougher than those of chemical names. Although there are some remarkable chemical named entity recognition systems available online such as ChemSpot and tmChem, the publicly available recognition systems of disease named entities are rare. This article presents a system for disease named entity recognition (DNER) and normalization. First, two separate DNER models are developed. One is based on conditional random fields model with a rule-based post-processing module. The other one is based on the bidirectional recurrent neural networks. Then the named entities recognized by each of the DNER model are fed into a support vector machine classifier for combining results. Finally, each recognized disease named entity is normalized to a medical subject heading disease name by using a vector space model based method. Experimental results show that using 1000 PubMed abstracts for training, our proposed system achieves an F1-measure of 0.8428 at the mention level and 0.7804 at the concept level, respectively, on the testing data of the chemical-disease relation task in BioCreative V.
Wei et al. REF combined conditional random fields and bidirectional recurrent neural networks for disease named entity recognition in the biomedical domain.
6238307
Disease named entity recognition by combining conditional random fields and bidirectional recurrent neural networks
{ "venue": "Database: The Journal of Biological Databases and Curation", "journal": "Database: The Journal of Biological Databases and Curation", "mag_field_of_study": [ "Computer Science", "Medicine" ] }
We present a method for estimating articulated human pose from a single static image based on a graphical model with novel pairwise relations that make adaptive use of local image measurements. More precisely, we specify a graphical model for human pose which exploits the fact the local image measurements can be used both to detect parts (or joints) and also to predict the spatial relationships between them (Image Dependent Pairwise Relations). These spatial relationships are represented by a mixture model. We use Deep Convolutional Neural Networks (DCNNs) to learn conditional probabilities for the presence of parts and their spatial relationships within image patches. Hence our model combines the representational flexibility of graphical models with the efficiency and statistical power of DCNNs. Our method significantly outperforms the state of the art methods on the LSP and FLIC datasets and also performs very well on the Buffy dataset without any training.
Chen et al. REF presented a deep graphical model that uses deep CNNs to learn conditional probabilities for the presence of parts and their spatial relationships within image patches.
6619926
Articulated Pose Estimation by a Graphical Model with Image Dependent Pairwise Relations
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science" ] }
Distributed stochastic gradient descent is an important subroutine in distributed learning. A setting of particular interest is when the clients are mobile devices, where two important concerns are communication efficiency and the privacy of the clients. Several recent works have focused on reducing the communication cost or introducing privacy guarantees, but none of the proposed communication efficient methods are known to be privacy preserving and none of the known privacy mechanisms are known to be communication efficient. To this end, we study algorithms that achieve both communication efficiency and differential privacy. For d variables and n ≈ d clients, the proposed method uses O(log log(nd)) bits of communication per client per coordinate and ensures constant privacy. We also extend and improve previous analysis of the Binomial mechanism showing that it achieves nearly the same utility as the Gaussian mechanism, while requiring fewer representation bits, which can be of independent interest.
Building on DP-SGD, Agarwal et al. REF apply differential privacy to distributed stochastic gradient descent to achieve both communication efficiency and privacy preservation.
44113205
cpSGD: Communication-efficient and differentially-private distributed SGD
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science", "Mathematics" ] }
In the last two decades, the continuous increase of computational power has produced an overwhelming flow of data which has called for a paradigm shift in the computing architecture and large-scale data processing mechanisms. MapReduce is a simple and powerful programming model that enables easy development of scalable parallel applications to process vast amounts of data on large clusters of commodity machines. It isolates the application from the details of running a distributed program such as issues on data distribution, scheduling, and fault tolerance. However, the original implementation of the MapReduce framework had some limitations that have been tackled by many research efforts in several followup works after its introduction. This article provides a comprehensive survey for a family of approaches and mechanisms of large-scale data processing mechanisms that have been implemented based on the original idea of the MapReduce framework and are currently gaining a lot of momentum in both research and industrial communities. We also cover a set of introduced systems that have been implemented to provide declarative programming interfaces on top of the MapReduce framework. In addition, we review several large-scale data processing systems that resemble some of the ideas of the MapReduce framework for different purposes and application scenarios. Finally, we discuss some of the future research directions for implementing the next generation of MapReduce-like solutions.
In REF, a comprehensive survey of large-scale data processing mechanisms based on MapReduce is provided.
2269759
The family of mapreduce and large-scale data processing systems
{ "venue": "CSUR", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Binary classifiers are often employed as discriminators in GAN-based unsupervised style transfer systems to ensure that transferred sentences are similar to sentences in the target domain. One difficulty with this approach is that the error signal provided by the discriminator can be unstable and is sometimes insufficient to train the generator to produce fluent language. In this paper, we propose a new technique that uses a target domain language model as the discriminator, providing richer and more stable token-level feedback during the learning process. We train the generator to minimize the negative log likelihood (NLL) of generated sentences, evaluated by the language model. By using a continuous approximation of discrete sampling under the generator, our model can be trained using back-propagation in an end-to-end fashion. Moreover, our empirical results show that when using a language model as a structured discriminator, it is possible to forgo adversarial steps during training, making the process more stable. We compare our model with previous work that uses convolutional networks (CNNs) as discriminators, as well as a broad set of other approaches. Results show that the proposed method achieves improved performance on three tasks: word substitution decipherment, sentiment modification, and related language translation.
Using a target domain language model as a discriminator has also been employed REF , providing richer and more stable token-level feedback during the learning process.
44061800
Unsupervised Text Style Transfer using Language Models as Discriminators
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science" ] }
Because computers may contain or interact with sensitive information, they are often air-gapped and in this way kept isolated and disconnected from the Internet. In recent years the ability of malware to communicate over an air-gap by transmitting sonic and ultrasonic signals from a computer speaker to a nearby receiver has been shown. In order to eliminate such acoustic channels, current best practice recommends the elimination of speakers (internal or external) in secure computers, thereby creating a so-called 'audio-gap'. In this paper, we present Fansmitter, a malware that can acoustically exfiltrate data from air-gapped computers, even when audio hardware and speakers are not present. Our method utilizes the noise emitted from the CPU and chassis fans which are present in virtually every computer today. We show that software can regulate the internal fans' speed in order to control the acoustic waveform emitted from a computer. Binary data can be modulated and transmitted over these audio signals to a remote microphone (e.g., on a nearby mobile phone). We present Fansmitter's design considerations, including acoustic signature analysis, data modulation, and data transmission. We also evaluate the acoustic channel, present our results, and discuss countermeasures. Using our method we successfully transmitted data from an air-gapped computer without audio hardware to a smartphone receiver in the same room. We demonstrated the effective transmission of encryption keys and passwords from a distance of zero to eight meters, with a bit rate of up to 900 bits/hour. We show that our method can also be used to leak data from different types of IT equipment, embedded systems, and IoT devices that have no audio hardware, but contain fans of various types and sizes. Air-gapped computers are kept physically isolated from the Internet or other less secure networks.
Such isolation is often enforced when sensitive or confidential data is involved, in order to reduce the risk of data leakage. In this paper we introduce an acoustic channel which doesn't require a speaker or other audio-related hardware to be installed in the infected computer. We show that the noise emitted from a computer's internal CPU and chassis cooling fans can be intentionally controlled by software. For example, malicious code on a contaminated computer can intentionally regulate the speed of a computer's cooling fans to indirectly control its acoustic waveform. In this way, sensitive data (e.g., encryption keys and passwords) can be modulated and transmitted over the acoustic channel. These signals can then be received by a remote microphone (e.g., via a nearby smartphone), and be decoded and sent to an attacker. Our new method is applicable to a variety of computers and devices equipped with internal fans, including devices such as servers, printers, industrial and legacy systems, and Internet of Things (IoT) devices. Covert channels have been widely discussed in professional literature [20] [21] [22]. Our work focuses on covert channels that can exfiltrate data from air-gapped computers without requiring network connectivity. Over the years different types of out-of-band covert channels have been proposed, aimed at bridging air-gap isolation. The proposed methods can be categorized into electromagnetic, optic, thermal, and acoustic covert channels. Electromagnetic emissions are probably the oldest type of covert channels that have been explored academically. Kuhn and Anderson [7], in a pioneering work in this field, discuss hidden data transmission using electromagnetic emissions from a video card. Among acoustical methods, Madhavapeddy et al [15] discuss 'audio networking,' which allows data transmission between a pair of desktop computers, using 10-dollar speakers and a microphone.
In 2013, Hanspach and Goetz [33] extend a method for near-ultrasonic covert networking between air-gapped computers using speakers and microphones. They create a mesh network and use it to implement an air-gapped key-logger to demonstrate the covert channel. The concept of communicating over inaudible sounds has been comprehensively examined by Lee et al [13], and has also been extended for different scenarios using laptops and smartphones [34]. Table 1 summarizes the different types of covert channels for air-gapped computers, including our Fansmitter method. As can be seen, existing acoustic methods require the installation of an external or internal speaker in the transmitting computer. This is considered a restrictive demand, because in many cases, speakers are forbidden on air-gapped computers based on regulations and security practices [17]. On the other hand, with our method the transmitting computer does not need to be equipped with audio hardware or an internal or external speaker.
In 2016, Guri et al introduced Fansmitter, a malware which facilitates the exfiltration of data from an air-gapped computer via noise intentionally emitted from the PC fans REF .
12421514
Fansmitter: Acoustic Data Exfiltration from (Speakerless) Air-Gapped Computers
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science" ] }
This paper proposes a utility-based resource allocation algorithm for the uplink OFDMA Inter-cell Interference (ICI) limited cooperative relay network. Full channel state information (CSI) is assumed to be available at the resource controller at initial stage, then the work is extended to consider more realistic assumption, i.e., only partial channel state information (PCSI) is available. The proposed algorithm aims to maximize the total system utility while simultaneously satisfying the individual user's minimum data rate requirements. In the proposed algorithm, relay selection is initially performed based on the consideration of ICI. Then, subcarrier allocation is performed to achieve maximum utility assuming equal power allocation. Finally, based on the amount of ICI, a modified water-filling power distribution algorithm is proposed and used to optimize the per-carrier power allocation across the allocated set of subcarriers. The results show that, compared to conventional algorithms, the proposed algorithm significantly improves system performance in terms of total sum data rate, outage probability and fairness.
In a similar manner, REF proposed a utility-based resource allocation algorithm for the uplink OFDMA Inter-cell Interference (ICI) limited cooperative relay network.
39251103
Utility-based resource allocation for interference limited OFDMA cooperative relay networks
{ "venue": "Phys. Commun.", "journal": "Phys. Commun.", "mag_field_of_study": [ "Computer Science" ] }
Abstract. Organizations which provide electronic services do not have a logically structured strategy for implementing Customer Knowledge Management through Social media (SCKM). By assessing the position of SCKM, organizations can have a clear understanding of their maturity level and find their future investment interests. This research examined the maturity assessment of SCKM utilizing a fuzzy expert system. It consisted of a-four-stage procedure. The maturity model is based on 11 critical success factors, including strategy, leadership, information technology, knowledge management, culture, process, resources, business intelligence, security, social customer, and assessment. Results showed that the studied organization has covered 48.2% of maturity on the first level and 51.8% on the second level. Thus, to increase productivity, it is indispensable for organizations to act in a targeted way. The fuzzy expert system is not designed specifically for a case study, but can be utilized as a reference for in-depth analysis of the organizational readiness for SCKM implementation and development within organizations, which provide e-services applications.
In addition, REF presents an SCKM maturity model based on 11 critical success factors.
55426828
Maturity assessment of social customer knowledge management (SCKM) using fuzzy expert system
{ "venue": null, "journal": "Journal of Business Economics and Management", "mag_field_of_study": [ "Economics" ] }
Abstract-Mirror sites enable client requests to be serviced by any of a number of servers, reducing load at individual servers and dispersing network load. Typically, a client requests service from a single mirror site. We consider enabling a client to access a file from multiple mirror sites in parallel to speed up the download. To eliminate complex client-server negotiations that a straightforward implementation of this approach would require, we develop a feedback-free protocol based on erasure codes. We demonstrate that a protocol using fast Tornado codes can deliver dramatic speedups at the expense of transmitting a moderate number of additional packets into the network. Our scalable solution extends naturally to allow multiple clients to access data from multiple mirror sites simultaneously. Our approach applies naturally to wireless networks and satellite networks as well.
For example, the authors in REF use Tornado codes to download data simultaneously from multiple mirror sites.
1757484
Accessing multiple mirror sites in parallel: using Tornado codes to speed up downloads
{ "venue": "IEEE INFOCOM '99. Conference on Computer Communications. Proceedings. Eighteenth Annual Joint Conference of the IEEE Computer and Communications Societies. The Future is Now (Cat. No.99CH36320)", "journal": "IEEE INFOCOM '99. Conference on Computer Communications. Proceedings. Eighteenth Annual Joint Conference of the IEEE Computer and Communications Societies. The Future is Now (Cat. No.99CH36320)", "mag_field_of_study": [ "Computer Science" ] }
Abstract. Recently, large breakthroughs have been observed in saliency modeling. The top scores on saliency benchmarks have become dominated by neural network models of saliency, and some evaluation scores have begun to saturate. Large jumps in performance relative to previous models can be found across datasets, image types, and evaluation metrics. Have saliency models begun to converge on human performance? In this paper, we re-examine the current state-of-the-art using a finegrained analysis on image types, individual images, and image regions. Using experiments to gather annotations for high-density regions of human eye fixations on images in two established saliency datasets, MIT300 and CAT2000, we quantify up to 60% of the remaining errors of saliency models. We argue that to continue to approach human-level performance, saliency models will need to discover higher-level concepts in images: text, objects of gaze and action, locations of motion, and expected locations of people in images. Moreover, they will need to reason about the relative importance of image regions, such as focusing on the most important person in the room or the most informative sign on the road. More accurately tracking performance will require finer-grained evaluations and metrics. Pushing performance further will require higher-level image understanding.
Furthermore, REF showed how gaze prediction can improve state-of-the-art saliency models.
1178886
Where Should Saliency Models Look Next
{ "venue": "ECCV", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
We consider how relatively simple extensions of popular channel-aware schedulers can be used to multicast scalable video streams in high-speed radio access networks. To support the evaluation, we first describe a model of the channel distortion of scalable video coding and validate it using eight commonly used test sequences. We use the distortion model in a detailed simulation setup to compare the performance of six schedulers, among them the Max-Sum and Max-Prod schedulers, which aim to maximize the sum and the product of streaming utilities, respectively. We investigate how the traffic load, user mobility, layering structure, and users' aversion to fluctuating distortion influence the streaming performance. Our results show that the Max-Sum scheduler performs better than the other considered schemes in almost all scenarios. With the Max-Sum scheduler, the gain of scalable video coding compared to non-scalable coding is substantial, even when users do not tolerate frequent changes in video quality.
The authors in REF proposed schedulers aiming to maximize the sum and the product of streaming utilities.
15635271
Multicast scheduling for scalable video streaming in wireless networks
{ "venue": "MMSys '10", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Abstract-The widespread success of sampling-based planning algorithms stems from their ability to rapidly discover the connectivity of a configuration space. Past research has found that non-uniform sampling in the configuration space can significantly outperform uniform sampling; one important strategy is to bias the sampling distribution based on features present in the underlying workspace. In this paper, we unite several previous approaches to workspace biasing into a general framework for automatically discovering useful sampling distributions. We present a novel algorithm, based on the REINFORCE family of stochastic policy gradient algorithms, which automatically discovers a locally-optimal weighting of workspace features to produce a distribution which performs well for a given class of sampling-based motion planning queries. We also present a novel set of workspace features that our adaptive algorithm can leverage for improved configuration space sampling. Experimental results show our algorithm to be effective across a variety of robotic platforms and high-dimensional configuration spaces.
In sampling-based planning, one approach biases sampling during a planning query based on previous experience REF.
9929417
Adaptive workspace biasing for sampling-based planners
{ "venue": "2008 IEEE International Conference on Robotics and Automation", "journal": "2008 IEEE International Conference on Robotics and Automation", "mag_field_of_study": [ "Computer Science" ] }
The Congested Clique is a distributed-computing model for single-hop networks with restricted bandwidth that has been studied very intensively in recent years. It models a network by an n-vertex graph in which any pair of vertices can communicate with one another by transmitting O(log n) bits in each round. Various problems have been studied in this setting, but for some of them the best-known results are those for general networks. For other problems, the results for Congested Cliques are better than on general networks, but still incur a significant dependency on the number of vertices n. Hence the performance of these algorithms may become poor on large cliques, even though their diameter is just 1. In this paper we devise significantly improved algorithms for various symmetry-breaking problems, such as forest decompositions, vertex colorings, and maximal independent set. We analyze the running time of our algorithms as a function of the arboricity a of a clique subgraph that is given as input. The arboricity is always smaller than the number of vertices n in the subgraph, and for many families of graphs it is significantly smaller. In particular, trees, planar graphs, graphs with constant genus, and many other graphs have bounded arboricity, but unbounded size. We obtain an O(a)-forest-decomposition algorithm with O(log a) time that improves the previously known O(log n) time, an O(a^(2+ε))-coloring in O(log* n) time that improves upon an O(log n)-time algorithm, an O(a)-coloring in O(a^ε) time that improves upon several previous algorithms, and a maximal independent set algorithm with O(√a) time that improves at least quadratically upon the state of the art for small and moderate values of a. These results are achieved using several techniques. First, we produce a forest decomposition with a helpful structure called an H-partition within O(log a) rounds. In general graphs this structure requires Θ(log n) time, but in Congested Cliques we are able to compute it faster.
We employ this structure in conjunction with partitioning techniques that allow us to solve various symmetry-breaking problems efficiently.
Barenboim and Khazanov REF presented deterministic local algorithms whose running time is a function of the graph's arboricity.
3390764
Distributed Symmetry-Breaking Algorithms for Congested Cliques
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science" ] }
We report characteristics of in-text citations in over five million full text articles from two large databases -the PubMed Central Open Access subset and Elsevier journals -as functions of time, textual progression, and scientific field. The purpose of this study is to understand the characteristics of in-text citations in a detailed way prior to pursuing other studies focused on answering more substantive research questions. As such, we have analyzed in-text citations in several ways and report many findings here. Perhaps most significantly, we find that there are large field-level differences that are reflected in position within the text, citation interval (or reference age), and citation counts of references. In general, the fields of Biomedical and Health Sciences, Life and Earth Sciences, and Physical Sciences and Engineering have similar reference distributions, although they vary in their specifics. The two remaining fields, Mathematics and Computer Science and Social Science and Humanities, have different reference distributions from the other three fields and between themselves. We also show that in all fields the numbers of sentences, references, and in-text mentions per article have increased over time, and that there are field-level and temporal differences in the numbers of in-text mentions per reference. A final finding is that references mentioned only once tend to be much more highly cited than those mentioned multiple times.
REF analyzed in-text citation characteristics in a large dataset of Elsevier full-text journal articles and PubMed Central Open Access Subset articles.
2817038
Characterizing in-text citations in scientific articles: A large-scale analysis
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science" ] }
Recent work on higher-dimensional type theory has explored connections between Martin-Löf type theory, higher-dimensional category theory, and homotopy theory. These connections suggest a generalization of dependent type theory to account for computationally relevant proofs of propositional equality-for example, taking Id Set A B to be the isomorphisms between A and B. The crucial observation is that all of the familiar type and term constructors can be equipped with a functorial action that describes how they preserve such proofs. The key benefit of higher-dimensional type theory is that programmers and mathematicians may work up to isomorphism and higher equivalence, such as equivalence of categories. In this paper, we consider a further generalization of higher-dimensional type theory, which associates each type with a directed notion of transformation between its elements. Directed type theory accounts for phenomena not expressible in symmetric higher-dimensional type theory, such as a universe set of sets and functions, and a type Ctx used in functorial abstract syntax. Our formulation requires two main ingredients: First, the types themselves must be reinterpreted to take account of variance; for example, a Π type is contravariant in its domain, but covariant in its range. Second, whereas in symmetric type theory proofs of equivalence can be internalized using the Martin-Löf identity type, in directed type theory the two-dimensional structure must be made explicit at the judgemental level. We describe a 2-dimensional directed type theory, or 2DTT, which is validated by an interpretation into the strict 2-category Cat of categories, functors, and natural transformations. We also discuss applications of 2DTT for programming with abstract syntax, generalizing the functorial approach to syntax to the dependently typed and mixed-variance case.
Our presentation here is based on our previous work on 2DTT REF, a 2-dimensional directed type theory, which generalizes equivalence to an asymmetric notion of transformation.
16971580
2-Dimensional Directed Type Theory
{ "venue": "MFPS", "journal": "Electr. Notes Theor. Comput. Sci.", "mag_field_of_study": [ "Computer Science", "Mathematics" ] }
We analyze the entire publication database of the American Physical Society, generating longitudinal (50 years) citation networks geolocalized at the level of single urban areas. We define the knowledge diffusion proxy and scientific production ranking algorithms to capture the spatiotemporal dynamics of Physics knowledge worldwide. By using the knowledge diffusion proxy we identify the key cities in the production and consumption of knowledge in Physics as a function of time. The results from the scientific production ranking algorithm allow us to characterize the top cities for scholarly research in Physics. Although we focus on a single dataset concerning a specific field, the methodology presented here opens the path to comparative studies of the dynamics of knowledge across disciplines and research areas. Over the last decade, the digitalization of publication datasets has propelled bibliographic studies, allowing for the first time access to the geospatial distribution of millions of publications and citations at different granularities [1, 2, 3, 4, 5, 6, 7, 8] (see [9] for a review). More precisely, authors' names, affiliations, addresses, and references can be aggregated at different scales and used to characterize publication and citation patterns of single papers [10, 11], journals [12, 13], authors [14, 15, 16], institutions [17], cities [18], or countries [19]. The sheer size of the datasets also allows system-level analyses of research production and consumption [20], migration of authors [21, 22], and changes in production in several regions of the world as a function of time [5, 6], just to name a few examples. At the same time those analyses have spurred an intense research activity aimed at defining metrics able to capture the importance/ranking of authors, institutions, or even entire countries [23, 24, 14, 15, 17, 25, 26, 27, 28, 29].
Whereas such large datasets are extremely useful in understanding scholarly networks and in charting the creation of knowledge, they are also pointing out the limits of our conceptual and modeling frameworks [30] and call for a deeper understanding of the dynamics ruling the diffusion and use of knowledge across the social and geographical space. In this paper we study citation patterns of articles published in the American Physical Society (APS) journals in a fifty-year time interval [31]. Although in the early years of this period the dataset was obviously biased toward the scholarly activity within the USA, in the last twenty years only about 35% of the papers are produced in the USA. The same amount of production has been observed in databases that include multiple journals and disciplines [19, 7]. Indeed the journals of the APS are considered worldwide as reference publication venues that well represent the international research activity in Physics. Furthermore this dataset does not bundle different disciplines and publication languages, providing a homogeneous dataset concerning Physics scholarly research. For each paper we geolocalize the institutions contained in the authors' affiliations. In this way we are able to associate each paper in the database with specific urban areas. This defines a time-resolved, geolocalized citation network including 2,307 cities around the world engaged in the production of scholarly work in the area of Physics. Following previous works [17, 8] we assume that the number of given or received citations is a proxy of knowledge consumption or production, respectively. More precisely, we assume that citations are the currency traded between parties in the knowledge exchange. Nodes that receive citations export their knowledge to others. Nodes that cite other works import knowledge from others.
According to this assumption we classify nodes considering the unbalance in their trade. Knowledge producers are nodes that are cited (export) more than they cite (import). On the contrary, we label as consumers nodes that cite (import) more than they are cited (export). Using this classification, we define the knowledge diffusion proxy algorithm to explore how scientific knowledge flows from producers to consumers. This tool explicitly assumes a systemic perspective of knowledge diffusion, highlighting the global structure of scientific production and consumption in Physics. The temporal analysis reveals interesting patterns and the progressive delocalization of knowledge producers. In particular, we find that in the last twenty years the geographical distribution of knowledge production has drastically changed. A paramount example is the transition in the USA from a knowledge production localized around major urban areas on the east and west coasts to a broad geographical distribution in which a significant part of the knowledge production now occurs also in the midwestern and southern states of the USA. Analogously, we observe the early-90s dominance of the UK and Northern Europe subside to an increase of production from France, Italy and several regions of Spain. Interestingly, the last decade shows that several of China's urban areas are emerging as the largest knowledge consumers worldwide. The reasons underlying this phenomenon may be related to the significant growth of the economy and of the research and development sector in China in the early 21st century [32]. This positive stimulus also pushed up scientific consumption, with a large number of papers citing work from other world areas. Indeed, the increase in publications is associated with an increase in the citation unbalance, moving China to the top rank as consumer, since the recent influx of its new papers has not yet had the time to accumulate citations.
Although the knowledge diffusion proxy provides a measure of knowledge production and consumption, it may be inadequate in providing a rank of the most authoritative cities for Physics research. Indeed, a key issue in appropriately ranking the knowledge production is that not all citations have the same weight. Citations coming from authoritative nodes are heavier than others coming from less important nodes, thus defining a recursive diffusion of ranking of nodes in the citation network. In order to include this element in the ranking of cities we propose the scientific production ranking algorithm. This tool, inspired by PageRank [33], allows us to define the rank of each node, as a function of time, going beyond the knowledge diffusion proxy or simple local measures such as citation counts or the h-index [14]. In this algorithm the importance of each node diffuses through the citation links. The rank of a node is determined by the rank of the nodes that cite it, recursively, thus implicitly weighting differently citations from highly (lowly) ranked nodes. Also in this case we observe noticeable changes in the ranking of cities along the years. For instance the presence of both European and Asian cities in the top 100 list increases by 50% in the last 20 years. These findings suggest that the Internet, digitalization and accessibility of publications are creating a more level playing field where the dominance of specific areas of the world is being progressively eroded to the advantage of a more widespread and complex knowledge production and consumption dynamic. We focus our analysis on the APS dataset [31]. It contains all the papers published by the APS from 1893 to 2009. We consider only the last 50 years due to the incomplete geolocalization information available for the early years.
During this period, the large majority of indexed papers, 97.47%, contain complete information such as author names, journal of publication, day of publication, list of affiliations and list of citations to other articles published in APS journals. We geolocalized 96.97% of papers at the urban-area level with an accuracy of 98.5%. We refer the reader to the Methods section and to the Supplementary Information (SI) for the detailed description of the dataset and the techniques developed to geolocalize the affiliations. In total, only 43% of papers have been produced inside the USA. Interestingly, over time this fraction has decreased. For example, in the 1960s it was 85.59%, while in the last 10 years it decreased to just 36.67%. While one might assume that the APS dataset is biased toward the USA scientific community, the percentage of publications contributed by the USA in APS journals after 1990 is almost the same as in other publication datasets [19, 7]. These alternative datasets contain journals published all over the world and mix different scientific disciplines. This supports the idea that the APS journals are now attracting the worldwide physics scientific community independently of nationality, and fairly represent the world production and consumption of Physics. It is not possible to provide a quantitative analysis of possible nationality bias and disentangle it from an actual change in the dynamics of knowledge production. For this reason, and in order to minimize any bias in the analysis, we focus on the last 20 years of data. In order to construct the geolocalized citation network we consider nodes (urban areas) and directed links representing the presence of citations from a paper with affiliation in one urban area to a paper with affiliation in another urban area. For example, if a paper written in node i cites one paper written in node j there is a link from i to j, i.e., j receives a citation from i and i sends a citation to j.
Each paper may have multiple affiliations and therefore citations have to be proportionally distributed between all the nodes of the papers. For this reason we weight each link in order to take into account the presence of multiple affiliations and multiple citations. In a given time window, the total number of citations for papers written in j received from papers written in i is the weight of the link i → j, and the total number of citations that papers written in j send to papers written in k is the weight of the link j → k. For instance, if in a time window t there is one paper written in node j, which cites two papers written in node k and is cited by three papers written in node i, then w_jk = 2, w_ij = 3, and we add all such weights for each paper written in node j to obtain the weights of the links. For papers written in multiple cities, say j_1 and j_2, the weight is distributed equally. The time window we use in this manuscript is one year. We show an example of the network construction in Figure (1). In order to define the main actors in the production and consumption of Physics, we consider citations as a currency of trade. This analogy allows us to immediately grasp the meaning of and distinction between producers and consumers of scientific knowledge. Nodes that receive citations export their knowledge to the citing nodes. Instead, nodes that cite papers produced by other nodes of the network import knowledge from the cited nodes. Measuring the trade unbalance in citations, we define producers as cities that export more than they import, and consumers as cities that import more than they export. More precisely, we can measure the total knowledge imported by each urban area as Σ_j w_ij and the total export as Σ_j w_ji in a given year. Those measures however acquire specific meaning when considered relative to the total trade of Physics knowledge worldwide in the same year, i.e., the total number of citations worldwide S = Σ_ij w_ij.
The relative trade unbalance of each urban area i is then (Σ_j w_ji − Σ_j w_ij)/S. A negative or positive value of this quantity indicates that the urban area i is a consumer or a producer, respectively. In Figure (2)-A we show the worldwide geographical distribution of producer (red) and consumer (blue) urban areas for 1990 and 2009. Interestingly, during the 90s the production of Physics knowledge was highly localized in a few cities on the eastern and western coasts of the USA and in a few areas of Great Britain and Northern Europe. In 2009 the picture is completely different, with many producer cities in the central and southern parts of the USA, Europe and Japan. It is interesting to note that although the fraction of papers produced in the USA is generally decreasing or stable, many more cities in the USA acquire the status of knowledge producers. This implies that the quality of knowledge production in the USA is increasing and thus attracting more citations. This makes it clear that the knowledge produced by an urban area cannot be measured only by the raw number of papers. Citations are a more appropriate proxy that encodes the value of the products. They serve as an approximation of the actual flow of knowledge. Figure (2)-A also makes it clear that cities in China are playing the role of major consumers in both 1990 and 2009. We also observe that cities in other countries like Russia and India consumed less in 2009 than in 1990. In other words, in 2009 both the production and consumption of knowledge are less concentrated in specific places and generally spread more evenly geographically. In order to provide visual support to this conclusion we show in Figure (2)-B the geographical distribution of producers and consumers inside the USA. From the two maps the drift of knowledge production from the two coastal areas of the USA to the midwestern, central and southern states is evident. Similarly, in Figure (2)-C we plot the same information for western Europe.
In 1990 only a few urban areas in Germany and France were clearly producers. By 2009 this dominance had been consistently eroded by Italy, Spain and a more widespread geographical distribution of producers in France, Germany and the UK. Knowledge diffusion proxy. The definition of producers and consumers is based on a local measure that does not capture all possible correlations and bonds between nodes that are not directly connected. This might result in a partial view and description of the system, especially when connectivity patterns are complex [36, 37, 38, 39, 40]. Interestingly, a close analysis of each citation network, see Figure (3), clearly shows that citation patterns have indeed all the hallmarks of complex systems [36, 37, 38, 39, 40], especially in the last two decades. The system is self-organized, there is no central authority that assigns citations and papers to cities, there is no blueprint of the system's interactions, and as clearly shown in Figure (3)-C the statistical characteristics of the system are described by heavy-tailed distributions [36, 37, 38, 39, 40]. Not surprisingly, the level of complexity of the system has increased with time. In Figure (3)-A we plot the most statistically significant connections of the citation network between cities inside the USA in 1960, 1990 and 2009. We filter links by using the backbone extraction algorithm [41], which preserves the relevant connections of weighted networks while removing the least statistically significant ones. We visualize each filtered network by using a bundled representation of links [42]. The direction of each weighted link goes from blue (citing) to red (cited). Similarly, in Figure (3)-B, we visualize the most significant links between cities in Europe (the European Union's 27 countries, as well as Switzerland and Norway).
It is clear from Figure (3)-A that in 1960 the citation patterns inside the USA were limited to a few cities, and in Europe only a few cities were connected. Instead, in 1990 and 2009 we register an increase in the interactions among a larger number of cities. The observed temporal trend is well known and valid not just for Physics [43]. Among the many factors that have been advocated to explain this tendency are the growth of the research system and advances in technology that make collaboration and publishing easier [44, 45, 46, 20]. In order to explicitly consider the complex flow of citations between producers and consumers, we propose the knowledge diffusion proxy algorithm (see Methods section for the formal definition). In this algorithm, producers inject citations into the system that flow along the edges of the network to finally reach consumer cities, where the injected citations are absorbed. The algorithm allows charting the diffusion of knowledge, going beyond local measures. The entire topology of the network is explored, uncovering nontrivial correlations induced by global citation patterns. For instance, knowledge produced in a city may be consumed by another producer that in turn produces knowledge for other cities that are consumers. This points out that the actual consumption of knowledge is signalled not just by the unbalance of citations but by the overall topology of the production and consumption of knowledge in the whole network. Indeed, the final consumer of each injected citation may not be directly connected with the producer. Citations flow along all possible paths, sometimes through intermediate cities. In Table (1) and Table (2) we report the rankings of the top 10 final consumers evaluated by the knowledge diffusion proxy for the top 3 producers in 2009 and 1990, respectively. We also list the top 10 neighbours according to the local citation unbalance.
From these two tables it is clear that the final rank of each consumer, obtained by our algorithm, can be extremely different from the ranking obtained by just considering local unbalances. For instance, in 2009 Bratislava and Mainz rank among the top 10 consumers absorbing knowledge produced in Boston. However, according to the local measure of unbalance, these two cities are ranked outside the top 10 (shown in bold in Table (1)). Interestingly, even the top consumer for New Haven, Berlin, does not rank among the top 10 neighbours according to the citation unbalance. These findings confirm that in order to uncover the complex set of relationships among cities, it is crucial to consider the entire structure of the network, going beyond simple local measures. In Figure (4) the size of each circle is proportional to how many times each injected citation is absorbed by that consumer. In the plot, vertical grey strips indicate that the city was not a producer during those years (e.g. Orsay in 2008). The results show that, on average, Beijing is the top consumer for all of these producers over the past 20 years. Since China registered a large economic growth and an increase in its research population in the early 2000s, it is reasonable to assume that, thanks to this positive stimulus, many more papers were written in its capital, a dominant city for scientific research in China. However, the fast publication growth increased the unbalance between sent and received citations. Each paper published in a given city imports knowledge from the cited cities. Reaching a balance might require some time: each city needs to accumulate citations back to export its knowledge to other cities. We can speculate that in the near future cities in China might move among the strongest producers if a fair number of papers start receiving enough citations, which obviously depends on the quality of the research carried out in the last years.
This is the case for cities like Tokyo, which has gradually approached the citation balance in recent years. For instance, Table (2) shows that in 1990 Tokyo was among the top consumers. But by 2009 its contribution to citation consumption had become less significant, as observed from Figure (4) and Table (1). Ranking Cities. Authors, departments, institutions, governments and many funding agencies are extremely interested in identifying the most important sources of knowledge. The necessity to find objective measures of the importance of papers, authors, journals, and disciplines leads to the definition of a wide variety of rankings [23, 24]. Measures such as impact factor, number of citations and h-index [14] are commonly used to assess the importance of scientific production. However, these common indicators might fail to account for the actual importance and prestige associated with each publication. In order to overcome these limitations, many different measures have been proposed [25, 26, 27, 28]. Here we introduce the scientific production ranking algorithm (SPR), an iterative algorithm based on the notion of diffusing scientific credits. In the algorithm each node receives a credit that is redistributed to its neighbours at the next iteration, until the process converges to a stationary distribution of credit over all nodes (see Methods section for the formal definition). The credits diffuse following citation links self-consistently, implying that not all links have the same importance. Any city in the network will be more prominent in rank if it receives citations from high-rank sources. This process ensures that the rank of each city is self-consistently determined not just by the raw number of citations but also by whether the citations come from highly ranked cities.
In Table (3) we provide a quantitative measure of the change in the landscape of the most highly ranked cities in the world by showing the percentage of cities in the top 100 ranks for different continents. In Figure (7) we compare the ranking obtained by our recursive algorithm with the ranking obtained by considering the total volume of publications produced in each city. Since we are considering only journals of the APS, the impact factor is consistent across all cities and does not include the disproportionate effects that often happen when mixing disciplines or journals with varied readership. It is then natural to consider a ranking based on the raw productivity of each place. As we see in the figure, though, the two rankings, although obviously correlated, provide different results. A number of cities whose ranking according to productivity is in the top 20 cities in the world are ranked one order of magnitude lower by the SPR algorithm. Valuing the number of citations and their origin in the ranking of cities produces results often not consistent with the raw number of papers, signaling that in some places a large fraction of papers are not producing knowledge, as they are not cited. We believe that the present algorithm may be considered an appropriate way to rank scientific production, taking properly into account the impact of papers as measured by citations. In this paper we study the scientific knowledge flows among cities as measured by papers and citations contained in APS journals [31]. In order to make clear the meaning of and difference between producers and consumers in the context of knowledge, we propose an economic analogy referring to citations as a currency traded between urban areas. We then study the flow of citations from producers to consumers with the knowledge diffusion proxy algorithm. Finally, we rank the importance of cities as a function of time using the scientific production ranking algorithm.
We also find that some European, Russian and Japanese cities have gradually improved their productivity and rank over the last twenty years. Similar growth in scientific production has been observed by King [19] in the ISI database. As discussed in detail in the SI, by aggregating the citations of cities to their respective countries, we find the same correlation between the number of citations (as well as the number of papers) and the GDP invested in Research and Development for several countries, as reported by Pan et al. [7] based on the ISI database. This agreement between our results and many others in the literature suggests that the APS dataset, although limited, is representative of the overall scientific production of the largest countries and cities over the last 20 years. The methodology proposed in this paper could be readily extended to larger datasets for which the geolocalization of multiple affiliations is possible. In view of the different rates of publication and citation in different scientific fields, however, we believe that the analysis of scientific knowledge production should only consider homogeneous datasets. This would help the understanding of knowledge flows in different areas and identify the hot spots of each discipline worldwide. Dataset. The dataset also provides 4,710,548 records of citations between articles published in APS journals. To build citation networks at the city level, we merge the citation links from the same source node to the same target node, and assign the total number of citations to this link as its weight. For articles with multiple city names, the weight is distributed equally among the links of these nodes. In total, there are 2,765,565 links in the city-to-city citation networks from 1960 to 2009. (For the full details of parsing country and city names, as well as building the networks, see the Supplementary Information (SI).) Knowledge diffusion proxy algorithm.
This analysis tool is inspired by the dollar experiment, originally developed to characterize the flow of money in economic networks [48]. Formally, it is a biased random walk with sources and sinks in which a citation diffuses through the network. The diffusion takes place on top of the network of net trade flows. Let us define w_ij as the number of citations that node i gives to j, and w_ji as the opposite flow. We can then define the antisymmetric matrix T_ij = w_ij − w_ji. The network of net trade is defined by the matrix F with F_ij = |T_ij| = |T_ji| for all connected pairs (i, j) with T_ij < 0, and F_ij = 0 for all connected pairs (i, j) with T_ij ≥ 0. There are two types of nodes. Producers are nodes with a positive trade imbalance Δs_i = s_i^in − s_i^out = Σ_j F_ji − Σ_j F_ij; their in-strength is larger than their out-strength. Consumers, on the other hand, are nodes with a negative imbalance Δs. On top of this network a citation is injected at a producer city. The citation follows the outgoing edges with probability proportional to their intensities, and the probability that the citation is absorbed in a consumer city j is P_abs(j) = |Δs_j|/s_j^in. By repeating this process many times from each starting point (producer) we can build a matrix with elements e_ij that measure how many times a citation injected at producer city i is absorbed in consumer city j.
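A minimal numpy sketch of the net-trade construction just defined (the Monte Carlo injection/absorption step is omitted); the citation counts in the toy matrix are invented for illustration.

```python
import numpy as np

def trade_network(w):
    """Build the net-trade network of the diffusion proxy from raw
    citation counts, where w[i][j] = citations that city i gives to j.
    Returns F and the imbalance ds (ds[i] > 0: producer; < 0: consumer)."""
    w = np.asarray(w, dtype=float)
    T = w - w.T                    # antisymmetric trade matrix T_ij = w_ij - w_ji
    F = np.where(T < 0, -T, 0.0)   # F_ij = |T_ij| only on pairs with T_ij < 0
    s_in, s_out = F.sum(axis=0), F.sum(axis=1)
    ds = s_in - s_out              # ds_i = sum_j F_ji - sum_j F_ij
    return F, ds

# Toy counts: city 1 cites city 0 heavily; city 2 cites city 1.
F, ds = trade_network([[0, 1, 0],
                       [5, 0, 0],
                       [0, 3, 0]])
assert abs(ds.sum()) < 1e-12  # net trade is globally balanced
```

Note that F keeps only the net flow of each pair, so no pair of cities ever has flow in both directions, and the imbalances sum to zero by construction.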
In addition, Zhang et al. used the total knowledge import and total knowledge export to calculate the relative trade imbalance of each entity, where a negative or positive value indicates that the entity is a consumer or a producer, respectively REF.
5378687
Characterizing scientific production and consumption in Physics
{ "venue": "Nature Scientific Reports 3, 1640 (2013)", "journal": null, "mag_field_of_study": [ "Computer Science", "Medicine", "Physics" ] }
We propose a unified model combining the strengths of extractive and abstractive summarization. On the one hand, a simple extractive model can obtain sentence-level attention with high ROUGE scores, but the output is less readable. On the other hand, a more complicated abstractive model can obtain word-level dynamic attention to generate a more readable paragraph. In our model, sentence-level attention is used to modulate the word-level attention such that words in less attended sentences are less likely to be generated. Moreover, a novel inconsistency loss function is introduced to penalize the inconsistency between the two levels of attention. By end-to-end training of our model with the inconsistency loss and the original losses of the extractive and abstractive models, we achieve state-of-the-art ROUGE scores while producing the most informative and readable summaries on the CNN/Daily Mail dataset in a solid human evaluation.
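The attention modulation and the inconsistency penalty can be illustrated with a small numpy sketch; the renormalization and the top-k form below are simplifying assumptions rather than the paper's exact loss.

```python
import numpy as np

def modulate(word_attn, sent_attn, sent_of_word):
    """Scale each word's attention by its sentence's attention and
    renormalize, so words in weakly attended sentences are unlikely
    to be generated."""
    scaled = word_attn * sent_attn[sent_of_word]
    return scaled / scaled.sum()

def inconsistency_loss(word_attn, sent_attn, sent_of_word, k=3):
    """Penalize top-k attended words that sit in weakly attended
    sentences (a simplified form of the inconsistency loss)."""
    top = np.argsort(word_attn)[-k:]
    return -np.log(np.mean(word_attn[top] * sent_attn[sent_of_word[top]]) + 1e-12)

# Toy decoder step: 6 source words, first 3 in sentence 0, last 3 in sentence 1.
sent_of_word = np.array([0, 0, 0, 1, 1, 1])
word_attn = np.array([0.05, 0.05, 0.10, 0.20, 0.30, 0.30])
agree = inconsistency_loss(word_attn, np.array([0.1, 0.9]), sent_of_word)
clash = inconsistency_loss(word_attn, np.array([0.9, 0.1]), sent_of_word)
assert agree < clash  # attending the same sentence at both levels is cheaper
```

When the two attention levels disagree (word attention concentrated in a sentence the sentence-level model ignores), the loss is larger, which is exactly the behaviour the training objective rewards against.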
REF propose a unified model via inconsistency loss to combine the extractive and abstractive methods.
21723747
A Unified Model for Extractive and Abstractive Summarization using Inconsistency Loss
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science" ] }
Abstract-Mobile edge computing (also known as fog computing) has recently emerged to enable in-situ processing of delay-sensitive applications at the edge of mobile networks. Providing grid power supply in support of mobile edge computing, however, is costly and even infeasible (in certain rugged or under-developed areas), thus mandating on-site renewable energy as a major or even sole power supply in increasingly many scenarios. Nonetheless, the high intermittency and unpredictability of renewable energy make it very challenging to deliver a high quality of service to users in energy harvesting mobile edge computing systems. In this paper, we address the challenge of incorporating renewables into mobile edge computing and propose an efficient reinforcement learning-based resource management algorithm, which learns on-the-fly the optimal policy of dynamic workload offloading (to the centralized cloud) and edge server provisioning to minimize the long-term system cost (including both service delay and operational cost). Our online learning algorithm uses a decomposition of the (offline) value iteration and (online) reinforcement learning, thus achieving a significant improvement of learning rate and run-time performance when compared to standard reinforcement learning algorithms such as Q-learning. We prove the convergence of the proposed algorithm and analytically show that the learned policy has a simple monotone structure amenable to practical implementation. Our simulation results validate the efficacy of our algorithm, which significantly improves the edge computing performance compared to fixed or myopic optimization schemes and conventional reinforcement learning algorithms.
Xu et al. REF addressed the challenge of incorporating renewable energy into MEC and proposed an efficient reinforcement learning-based resource management algorithm to minimize the long-term system cost (including both service delay and operational cost).
18453286
Online Learning for Offloading and Autoscaling in Energy Harvesting Mobile Edge Computing
{ "venue": "IEEE Transactions on Cognitive Communications and Networking", "journal": "IEEE Transactions on Cognitive Communications and Networking", "mag_field_of_study": [ "Mathematics", "Computer Science" ] }
Removing rain streaks from a single image has been drawing considerable attention as rain streaks can severely degrade the image quality and affect the performance of existing outdoor vision tasks. While recent CNN-based derainers have reported promising performances, deraining remains an open problem for two reasons. First, existing synthesized rain datasets have only limited realism, in terms of modeling real rain characteristics such as rain shape, direction and intensity. Second, there are no public benchmarks for quantitative comparisons on real rain images, which makes the current evaluation less objective. The core challenge is that real world rain/clean image pairs cannot be captured at the same time. In this paper, we address the single image rain removal problem in two ways. First, we propose a semi-automatic method that incorporates temporal priors and human supervision to generate a high-quality clean image from each input sequence of real rain images. Using this method, we construct a large-scale dataset of ∼29.5K rain/rain-free image pairs that covers a wide range of natural rain scenes. Second, to better cover the stochastic distribution of real rain streaks, we propose a novel SPatial Attentive Network (SPANet) to remove rain streaks in a local-to-global manner. Extensive experiments demonstrate that our network performs favorably against the state-of-the-art deraining methods.
Wang et al. REF propose spatial attentive residual blocks to remove rain streaks in a local-to-global manner.
91184545
Spatial Attentive Single-Image Deraining With a High Quality Real Rain Dataset
{ "venue": "2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)", "journal": "2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)", "mag_field_of_study": [ "Computer Science" ] }
Differential privacy has recently emerged as the de facto standard for private data release. This makes it possible to provide strong theoretical guarantees on the privacy and utility of released data. While it is well-understood how to release data based on counts and simple functions under this guarantee, it remains to provide general purpose techniques that are useful for a wider variety of queries. In this paper, we focus on spatial data, i.e., any multi-dimensional data that can be indexed by a tree structure. Directly applying existing differential privacy methods to this type of data simply generates noise. We propose instead the class of "private spatial decompositions": these adapt standard spatial indexing methods such as quadtrees and kd-trees to provide a private description of the data distribution. Equipping such structures with differential privacy requires several steps to ensure that they provide meaningful privacy guarantees. Various basic steps, such as choosing splitting points and describing the distribution of points within a region, must be done privately, and the guarantees of the different building blocks must be composed into an overall guarantee. Consequently, we expose the design space for private spatial decompositions, and analyze some key examples. A major contribution of our work is to provide new techniques for parameter setting and post-processing of the output to improve the accuracy of query answers. Our experimental study demonstrates that it is possible to build such decompositions efficiently, and use them to answer a variety of queries privately and with high accuracy.
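A toy sketch of one point in this design space: a fixed-depth quadtree that releases a Laplace-noised count at every node, splitting the privacy budget uniformly across levels. The uniform split and the fixed depth are illustrative choices; the paper analyzes better budget allocations and private split-point selection.

```python
import random

def laplace(scale):
    # The difference of two Exp(1) variables is Laplace-distributed with mean 0.
    return scale * (random.expovariate(1.0) - random.expovariate(1.0))

def private_quadtree(points, region, epsilon, depth):
    """Release a noisy count for every node of a depth-bounded quadtree
    over `region` = (x0, y0, x1, y1). Each level partitions the data
    (parallel composition), and the budget is split uniformly over the
    depth+1 levels, so the total privacy cost is epsilon."""
    eps_level = epsilon / (depth + 1)
    def build(pts, reg, d):
        x0, y0, x1, y1 = reg
        node = {"region": reg,
                "noisy_count": len(pts) + laplace(1.0 / eps_level),
                "children": []}
        if d < depth:
            xm, ym = (x0 + x1) / 2, (y0 + y1) / 2
            quads = [(x0, y0, xm, ym), (xm, y0, x1, ym),
                     (x0, ym, xm, y1), (xm, ym, x1, y1)]
            for q in quads:
                inside = [p for p in pts
                          if q[0] <= p[0] < q[2] and q[1] <= p[1] < q[3]]
                node["children"].append(build(inside, q, d + 1))
        return node
    return build(points, region, 0)
```

Range queries can then be answered from the noisy counts alone; post-processing the tree (e.g., enforcing consistency between a node and its children) is one of the accuracy improvements the paper studies.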
Cormode et al. REF adapted standard spatial indexing techniques, such as quadtrees and kd-trees, to decompose the data space in a differentially private manner.
5940611
Differentially Private Spatial Decompositions
{ "venue": "2012 IEEE 28th International Conference on Data Engineering", "journal": "2012 IEEE 28th International Conference on Data Engineering", "mag_field_of_study": [ "Computer Science" ] }
Query reformulation techniques based on query logs have been studied as a method of capturing user intent and improving retrieval effectiveness. The evaluation of these techniques, however, has primarily focused on proprietary query logs and selected samples of queries. In this paper, we suggest that anchor text, which is readily available, can be an effective substitute for a query log, and study the effectiveness of a range of query reformulation techniques (including log-based stemming, substitution, and expansion) using standard TREC collections. Our results show that log-based query reformulation techniques are indeed effective with standard collections, but expansion is a much safer form of query modification than word substitution. We also show that using anchor text as a simulated query log is at least as effective as a real log for these techniques.
Anchor text has for instance been shown to be an effective substitute for query logs in generating query reformulations REF , which indicates that these two resources should be correlated.
6331792
Query reformulation using anchor text
{ "venue": "WSDM '10", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
In the Shift Bribery problem, we are given an election (based on preference orders), a preferred candidate p, and a budget. The goal is to ensure that p wins by shifting p higher in some voters' preference orders. However, each such shift request comes at a price (depending on the voter and on the extent of the shift) and we must not exceed the given budget. We study the parameterized computational complexity of Shift Bribery with respect to a number of parameters (pertaining to the nature of the solution sought and the size of the election) and several classes of price functions. When we parameterize Shift Bribery by the number of affected voters, then for each of our voting rules (Borda, Maximin, Copeland) the problem is W[2]-hard. If, instead, we parameterize by the number of positions by which p is shifted in total, then the problem is fixed-parameter tractable for Borda and Maximin, and is W[1]-hard for Copeland. If we parameterize by the budget, then the results depend on the price function class. We also show that Shift Bribery tends to be tractable when parameterized by the number of voters, but that the results for the number of candidates are more enigmatic.
[Footnote 1] For example, the German website idealo.de aggregates different product tests by first translating the test results into a unified rating system and then taking the "average" of all the ratings. Various university rankings are prepared in a similar way. It would be very interesting, however, to utilize the rankings themselves, instead of the ratings, for the aggregation. Moreover, Formula 1 racing (and numerous similar competitions) uses pure ranking information (e.g., Formula 1 uses a very slightly modified variant of the Borda election rule).
[Footnote 2] We assume that we have knowledge of the voters' preference orders (for example, from pre-election polls). Further, in our example settings the full rankings are often known. For example, a driver preparing for a new Formula 1 season has full knowledge of the results from the previous one.
[Footnote 3] What "to convince" means can vary a lot depending on the application scenario. On the evil side we have bribery, but it can also mean things such as product development, hiring more faculty members, training on a particular racing circuit, or explaining the details of one's political platform. Clearly, different ranking providers may appreciate different efforts, which is modeled by the individual price functions.
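To make the problem concrete, here is a brute-force sketch for the Borda rule with a simple linear (per-position, per-voter) price function and the co-winner model; it is exponential in the number of voters, which is exactly why the parameterized analysis above matters. The toy votes and prices are invented for illustration.

```python
from itertools import product

def borda_winners(votes, m):
    """Co-winners under the Borda rule; each vote lists candidates best-first."""
    scores = [0] * m
    for order in votes:
        for pos, c in enumerate(order):
            scores[c] += m - 1 - pos
    best = max(scores)
    return [c for c in range(m) if scores[c] == best]

def cheapest_shift_bribery(votes, p, unit_price, budget):
    """Exhaustively try every vector of upward shifts of candidate p.
    Shifting p by s positions in voter v's order costs s * unit_price[v];
    ties count as winning (co-winner model). Returns the minimum cost
    within budget, or None if p cannot be made a winner."""
    m = len(votes[0])
    options = [range(order.index(p) + 1) for order in votes]
    best_cost = None
    for shifts in product(*options):
        cost = sum(s * w for s, w in zip(shifts, unit_price))
        if cost > budget:
            continue
        new_votes = []
        for order, s in zip(votes, shifts):
            order = list(order)
            i = order.index(p)
            order.insert(i - s, order.pop(i))   # move p up by s positions
            new_votes.append(order)
        if p in borda_winners(new_votes, m):
            if best_cost is None or cost < best_cost:
                best_cost = cost
    return best_cost

votes = [[0, 1, 2], [0, 2, 1], [1, 0, 2]]   # candidate 2 is the preferred one
assert cheapest_shift_bribery(votes, p=2, unit_price=[1, 1, 1], budget=4) == 2
```

In the toy election candidate 0 initially wins, and two unit shifts suffice to bring candidate 2 into the winner set.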
Chen et al. REF considered the parameterized complexity of Constructive Shift Bribery and showed a varied set of results: in general, parameterization by the number of positions by which the preferred candidate is shifted tends to lead to FPT algorithms, parameterization by the number of affected voters tends to lead to hardness results, and parameterization by the available budget gives results between these two extremes.
16997750
Prices Matter for the Parameterized Complexity of Shift Bribery
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science", "Mathematics" ] }
Abstract-In this paper, we investigate the performance of classification of protein crystallization images captured during the protein crystal growth process. We group protein crystallization images into 3 categories: noncrystals, likely leads (conditions that may yield formation of crystals) and crystals. In this research, we consider the subcategories of noncrystal and likely-leads protein crystallization images separately. We use 5 different classifiers to solve this problem, and we apply data preprocessing methods such as principal component analysis (PCA), min-max (MM) normalization and z-score (ZS) normalization to our datasets in order to evaluate their effects on the classifiers for the noncrystal and likely-leads datasets. We performed our experiments on 1606 noncrystal and 245 likely-leads images independently. We obtained satisfactory results for both datasets, reaching 96.8% accuracy for the noncrystal dataset and 94.8% accuracy for the likely-leads dataset. Our goal is to identify the best classifiers with optimal preprocessing techniques for both the noncrystal and likely-leads datasets.
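The preprocessing steps evaluated above can be sketched with numpy (the five classifiers themselves are omitted; the toy feature matrix is invented for illustration):

```python
import numpy as np

def min_max(X):
    """Min-max (MM) normalization to [0, 1] per feature."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    return (X - lo) / np.where(hi > lo, hi - lo, 1.0)

def z_score(X):
    """Z-score (ZS) normalization: zero mean, unit variance per feature."""
    mu, sd = X.mean(axis=0), X.std(axis=0)
    return (X - mu) / np.where(sd > 0, sd, 1.0)

def pca(X, k):
    """Project onto the top-k principal components of the feature matrix."""
    Xc = X - X.mean(axis=0)
    vals, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    return Xc @ vecs[:, np.argsort(vals)[::-1][:k]]

# Toy feature matrix: 4 images x 3 image features (invented values).
X = np.array([[1.0, 10.0, 3.0],
              [2.0, 20.0, 1.0],
              [3.0, 30.0, 2.0],
              [4.0, 40.0, 4.0]])
X_reduced = pca(z_score(X), k=2)
assert X_reduced.shape == (4, 2)
```

Each preprocessing variant (MM, ZS, PCA, or combinations) would then be fed to the classifiers to compare their effects, as the paper does.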
In our previous work REF , we evaluated the classification performance using 5 different classifiers, and feature reduction using principal components analysis (PCA) and normalization methods for the non-crystal and likely-lead datasets.
22177610
Evaluation of normalization and PCA on the performance of classifiers for protein crystallization images
{ "venue": "IEEE SOUTHEASTCON 2014", "journal": "IEEE SOUTHEASTCON 2014", "mag_field_of_study": [ "Computer Science", "Medicine" ] }
This paper introduces an approach to sentiment analysis which uses support vector machines (SVMs) to bring together diverse sources of potentially pertinent information, including several favorability measures for phrases and adjectives and, where available, knowledge of the topic of the text. Models using the features introduced are further combined with unigram models which have been shown to be effective in the past (Pang et al., 2002) and lemmatized versions of the unigram models. Experiments on movie review data from Epinions.com demonstrate that hybrid SVMs which combine unigram-style feature-based SVMs with those based on real-valued favorability measures obtain superior performance, producing the best results yet published using this data. Further experiments using a feature set enriched with topic information on a smaller dataset of music reviews handannotated for topic are also reported, the results of which suggest that incorporating topic information into such models may also yield improvement.
Mullen and Collier REF employ a hybrid SVM approach by making use of potentially pertinent information, including several favorability measures of terms and knowledge of the topics.
5651839
Sentiment Analysis Using Support Vector Machines With Diverse Information Sources
{ "venue": "SIGDAT Conference On Empirical Methods In Natural Language Processing", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Ontologies have become an important means for structuring knowledge and building knowledge-intensive systems. For this purpose, efforts have been made to facilitate the ontology engineering process, in particular the acquisition of ontologies from domain texts. We present a general architecture for discovering conceptual structures and engineering ontologies. Based on our generic architecture we describe a case study for mining ontologies from text using methods based on dictionaries and natural language text. The case study has been carried out in the telecommunications domain. Supporting the overall text ontology engineering process, our comprehensive approach combines dictionary parsing mechanisms for acquiring a domain-specific concept taxonomy with a discovery mechanism for the acquisition of non-taxonomic conceptual relations.
Semi-automatic methods focus on the acquisition of ontologies from domain texts REF .
11050085
null
null
In this paper, we investigate the use of hierarchical reinforcement learning (HRL) to speed up the acquisition of cooperative multi-agent tasks. We introduce a hierarchical multiagent reinforcement learning (RL) framework, and propose a hierarchical multi-agent RL algorithm called Cooperative HRL. In this framework, agents are cooperative and homogeneous (use the same task decomposition). Learning is decentralized, with each agent learning three interrelated skills: how to perform each individual subtask, the order in which to carry them out, and how to coordinate with other agents. We define cooperative subtasks to be those subtasks in which coordination among agents significantly improves the performance of the overall task. Those levels of the hierarchy which include cooperative subtasks are called cooperation levels. A fundamental property of the proposed approach is that it allows agents to learn coordination faster by sharing information at the level of cooperative subtasks, rather than attempting to learn coordination at the level of primitive actions. We study the empirical performance of the Cooperative HRL algorithm using two testbeds: a simulated two-robot trash collection task, and a larger four-agent automated guided vehicle (AGV) scheduling problem. We compare the performance and speed of Cooperative HRL with other learning algorithms, as well as several well-known industrial AGV heuristics. We also address the issue of rational communication behavior among autonomous agents in this paper. The goal is for agents to learn both action and communication policies that together optimize the task given a communication cost. We extend the multi-agent HRL framework to include communication decisions and propose a cooperative multi-agent HRL algorithm called COM-Cooperative HRL. In this algorithm, we add a communication level to the hierarchical decomposition of the problem below each cooperation level. 
Before an agent makes a decision at a cooperative subtask, it decides if it is worthwhile to perform a communication action. A communication action has a certain cost and provides the agent with the actions selected by the other agents at a cooperation level. We demonstrate the efficiency of the COM-Cooperative HRL algorithm as well as the relation between the communication cost and the learned communication policy using a multi-agent taxi problem.
Using the MAXQ framework, REF presents a hierarchical approach to multi-agent reinforcement learning.
2384747
Hierarchical multi-agent reinforcement learning
{ "venue": "Autonomous Agents and Multi-Agent Systems", "journal": "Autonomous Agents and Multi-Agent Systems", "mag_field_of_study": [ "Computer Science" ] }
The field of digital libraries (DLs) coalesced in 1994: the first digital library conferences were held that year, awareness of the World Wide Web was accelerating, and the National Science Foundation awarded $24 Million (U.S.) for the Digital Library Initiative (DLI). In this paper we examine the state of the DL domain after a decade of activity by applying social network analysis to the co-authorship network of the past ACM, IEEE, and joint ACM/IEEE digital library conferences. We base our analysis on a common binary undirectional network model to represent the co-authorship network, and from it we extract several established network measures. We also introduce a weighted directional network model to represent the co-authorship network, for which we define AuthorRank as an indicator of the impact of an individual author in the network. The results are validated against conference program committee members in the same period. The results show clear advantages of PageRank and AuthorRank over degree, closeness and betweenness centrality metrics. We also investigate the amount and nature of international participation in Joint Conference on Digital Libraries (JCDL).
For example, Liu et al. REF also applied PageRank to the co-authorship network in order to rank scientists.
2536130
Co-Authorship Networks in the Digital Library Research Community
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science" ] }
Network diagnosis, an essential research topic for traditional networking systems, has not received much attention for wireless sensor networks. Existing sensor debugging tools like Sympathy or EmStar rely heavily on an add-in protocol that generates and reports a large amount of status information from individual sensor nodes, introducing network overhead to a resource-constrained and usually traffic-sensitive sensor network. We report in this study our initial attempt at providing a light-weight network diagnosis mechanism for sensor networks. We propose PAD, a probabilistic diagnosis approach for inferring the root causes of abnormal phenomena. PAD employs a packet marking algorithm for efficiently constructing and dynamically maintaining the inference model. Our approach does not incur additional traffic overhead for collecting the desired information. Instead, we introduce a probabilistic inference model which encodes internal dependencies among different network elements for online diagnosis of an operational sensor network system. Such a model is capable of additively reasoning about root causes based on passively observed symptoms. We implement the PAD design in our sea monitoring sensor network test-bed and validate its effectiveness. We further evaluate the efficiency and scalability of this design through extensive trace-driven simulations.
PAD REF leverages a packet marking strategy for constructing and maintaining the inference model.
6339912
Passive diagnosis for wireless sensor networks
{ "venue": "SenSys '08", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
This paper presents a study on the role of discourse markers in argumentative discourse. We annotated a German corpus with arguments according to the common claim-premise model of argumentation and performed various statistical analyses regarding the discriminative nature of discourse markers for claims and premises. Our experiments show that particular semantic groups of discourse markers are indicative of either claims or premises and constitute highly predictive features for discriminating between them.
The role of discourse markers in the identification of claims and premises was discussed in REF, who conclude that such markers are moderately useful for identifying argumentative sentences.
88666
On the Role of Discourse Markers for Discriminating Claims and Premises in Argumentative Discourse
{ "venue": "EMNLP", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
In this paper, we present an election prediction system (Crystal) based on web users' opinions posted on an election prediction website. Given a prediction message, Crystal first identifies which party the message predicts to win and then aggregates prediction analysis results of a large amount of opinions to project the election results. We collect past election prediction messages from the Web and automatically build a gold standard. We focus on capturing lexical patterns that people frequently use when they express their predictive opinions about a coming election. To predict election results, we apply SVM-based supervised learning. To improve performance, we propose a novel technique which generalizes n-gram feature patterns. Experimental results show that Crystal significantly outperforms several baselines as well as a non-generalized n-gram approach. Crystal predicts future elections with 81.68% accuracy.
REF in turn describe a supervised learning system that predicts which party is going to win the election on the basis of opinions posted on an election prediction website (accuracy ∼81.68%).
9656604
Crystal: Analyzing Predictive Opinions on the Web
{ "venue": "2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
The methods for alleviating the state explosion problem in model checking can be classified coarsely into symbolic methods and abstraction methods [6]. By symbolic methods we understand the use of succinct data structures and symbolic algorithms which help keep state explosion under control by compressing information, using, e.g., binary decision diagrams or efficient SAT procedures. Abstraction methods, in contrast, attempt to reduce the size of the state space by employing knowledge about the system and the specification in order to model only relevant features in the Kripke structure. An abstraction function associates a Kripke structure M with an abstract Kripke structure M̂ such that two properties hold: Preservation. M̂ preserves all behaviors of M. Preservation ensures that every universal specification which is true in M̂ is also true in M. The converse implication, however, will not hold in general: a universal property which is false in M̂ may still be true in M. In this case, the counterexample obtained over M̂ cannot be reconstructed for the concrete Kripke structure M, and is called a spurious counterexample [10], or a false negative. An important example of abstraction is existential abstraction [11], where the abstract states are essentially taken to be equivalence classes of concrete states; a transition between two abstract states holds if there was a transition between any two concrete member states in the corresponding equivalence classes. In certain cases, the user's knowledge about the system will be sufficient to allow manual determination of a good abstraction function. In general, however, finding abstraction functions gives rise to the following dichotomy: If M̂ is too small, then spurious counterexamples are likely to occur. If M̂ is too large, then verification remains infeasible.
Counterexample-Guided Abstraction Refinement (CEGAR) is a natural approach to resolve this situation by using an adaptive algorithm which gradually improves an abstraction function by analysing spurious counterexamples. (i) Initialization. Generate an initial abstraction function. (ii) Model Checking. Verify the abstract model. If verification is successful, the specification is correct, and the algorithm terminates successfully. Otherwise, generate a counterexample T̂ on the abstract model. (iii) Sanity Check. Determine whether the abstract counterexample T̂ is spurious. If a concrete counterexample
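The three steps can be sketched as a toy CEGAR loop for safety checking, where the abstraction is a partition of the concrete states and refinement splits the block on which an abstract counterexample fails to concretize. This is a simplified stand-in for the general scheme, not the algorithm of any particular paper.

```python
from collections import deque

def cegar(states, trans, init, bad):
    """Toy CEGAR for the safety property "no bad state is reachable".
    Abstraction: a partition of the concrete states, with an abstract
    edge between two blocks iff some member states have a concrete edge
    (existential abstraction)."""
    blocks = [set(states)]                      # (i) trivial initial abstraction
    while True:
        def block_of(s):
            return next(i for i, b in enumerate(blocks) if s in b)
        # (ii) model-check the abstract structure: BFS for an abstract
        # path from an initial block to a block containing a bad state.
        edges = {(block_of(u), block_of(v)) for u, vs in trans.items() for v in vs}
        parent = {block_of(s): None for s in init}
        queue, cex = deque(parent), None
        while queue:
            b = queue.popleft()
            if blocks[b] & bad:
                cex = []
                while b is not None:
                    cex.append(b)
                    b = parent[b]
                cex.reverse()
                break
            for u, v in edges:
                if u == b and v not in parent:
                    parent[v] = b
                    queue.append(v)
        if cex is None:
            return "safe"                       # abstract model proves the property
        # (iii) sanity check: replay the abstract path on concrete states.
        reach, fail = set(init) & blocks[cex[0]], None
        for i in range(1, len(cex)):
            nxt = {v for u in reach for v in trans.get(u, ()) if v in blocks[cex[i]]}
            if not nxt:
                fail = i - 1                    # path breaks leaving block cex[i-1]
                break
            reach = nxt
        if fail is None and reach & bad:
            return "unsafe"                     # a genuine concrete counterexample
        # Refinement: split the failing block into its concretely
        # reachable part and the rest; the spurious counterexample disappears.
        split = cex[-1] if fail is None else cex[fail]
        blocks.append(blocks[split] - reach)
        blocks[split] = reach
```

Each refinement strictly splits one block, so the loop terminates after at most as many iterations as there are concrete states, at which point the abstraction coincides with the concrete system.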
The algorithm in this paper differs from counterexample-guided abstraction refinement REF in that it makes no direct use of concrete counterexamples.
9192961
Counterexample-guided abstraction refinement
{ "venue": "In Computer Aided Verification", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Abstract. In the research of the relationship between the formal and the computational view of cryptography, a recent approach, first proposed in [9] , uses static equivalence from cryptographic pi calculi as a notion of formal indistinguishability. Previous work [9, 1] has shown that this yields the soundness of natural interpretations of some interesting equational theories, such as certain cryptographic operations and a theory of XOR. In this paper however, we argue that static equivalence is too coarse for sound interpretations of equational theories in general. We show some explicit examples how static equivalence fails to work in interesting cases. To fix this problem, we propose a notion of formal indistinguishability that is more flexible than static equivalence. We provide a general framework along with general theorems, and then discuss how this new notion works for the explicit examples where static equivalence failed to ensure soundness. We also improve the treatment by using ordered sorts in the formal view, and by allowing arbitrary probability distributions of the interpretations.
In REF , Bana et al. argue that the notion of static equivalence is too coarse and not sound for many interesting equational theories.
14040589
Computational Soundness of Formal Indistinguishability and Static Equivalence
{ "venue": "IN PROC. 11TH ASIAN COMPUTING SCIENCE CONFERENCE (ASIAN’06), LNCS", "journal": null, "mag_field_of_study": [ "Mathematics", "Computer Science" ] }
Abstract. Gas is the unit measuring the computational effort required to execute each operation on the Ethereum blockchain platform. Each instruction executed by the Ethereum Virtual Machine (EVM) has an associated gas consumption specified by Ethereum. If a transaction exceeds the amount of gas allotted by the user (known as the gas limit), an out-of-gas exception is raised. There is a wide family of contract vulnerabilities due to out-of-gas behaviors. We report on the design and implementation of Gastap, a Gas-Aware Smart contracT Analysis Platform, which takes as input a smart contract (either in EVM, disassembled EVM, or Solidity source code) and automatically infers sound gas upper bounds for all its public functions. Our bounds ensure that if the gas limit paid by the user is higher than our inferred gas bounds, the contract is free of out-of-gas vulnerabilities.
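For intuition, a drastically simplified gas bound for a loop-free control-flow graph just sums opcode costs along the worst path. The cost table below is illustrative only (real EVM costs depend on the fork and on dynamic rules), and real GASTAP additionally models those rules precisely and solves recurrence relations to bound loops.

```python
# Illustrative opcode costs (real EVM costs depend on the fork and opcode rules).
COST = {"PUSH1": 3, "ADD": 3, "MUL": 5, "MSTORE": 3, "SSTORE": 20000, "STOP": 0}

def gas_upper_bound(cfg, block):
    """cfg maps a block name to (opcodes, successor blocks). For a
    loop-free CFG, a sound bound is the block's own cost plus the
    worst-case bound over its successors."""
    ops, succs = cfg[block]
    own = sum(COST[o] for o in ops)
    return own + (max(gas_upper_bound(cfg, s) for s in succs) if succs else 0)

# Hypothetical two-branch function: one branch writes memory, one writes storage.
cfg = {"entry": (["PUSH1", "PUSH1", "ADD"], ["cheap", "costly"]),
       "cheap": (["PUSH1", "MSTORE", "STOP"], []),
       "costly": (["PUSH1", "PUSH1", "SSTORE", "STOP"], [])}
bound = gas_upper_bound(cfg, "entry")
assert bound == 20015  # dominated by the storage-writing branch
```

A user supplying a gas limit above such a bound is guaranteed, in this idealized model, never to hit an out-of-gas exception in the function.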
GASTAP REF infers sound gas upper bounds for all public functions in a smart contract through complex transformation and analysis processes on the code.
53747910
GASTAP: A Gas Analyzer for Smart Contracts
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science" ] }
There has been an explosion of academic literature on steganography and steganalysis in the past two decades. With a few exceptions, such papers address abstractions of the hiding and detection problems, which arguably have become disconnected from the real world. Most published results, including by the authors of this paper, apply "in laboratory conditions" and some are heavily hedged by assumptions and caveats; significant challenges remain unsolved in order to implement good steganography and steganalysis in practice. This position paper sets out some of the important questions which have been left unanswered, as well as highlighting some that have already been addressed successfully, for steganography and steganalysis to be used in the real world.
Steganography has been the focus of much research interest over the past few decades, as well as of increasing applicability in the real world REF .
7930968
Moving steganography and steganalysis from the laboratory into the real world
{ "venue": "IH&MMSec '13", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Abstract-Spatio-temporal preferences and encounter statistics provide realistic measures for understanding mobile users' behavioral preferences and transfer opportunities in Delay Tolerant Networks (DTNs). Time-dependent behavior and periodic reappearances at specific locations can approximate future online presence, while encounter statistics can aid forwarding and routing decisions. It has been theoretically shown that such characteristics heavily affect the performance of routing protocols. Therefore, mobility models demonstrating such characteristics are also expected to show identical routing performance. However, we argue that models, despite capturing these properties, deviate from their expected routing performance. We use realistic traces to validate this observation on two mobility models. Our empirical results for epidemic routing show that those models largely differ (delay 67% & reachability 79%) from the observed values. This in turn calls for two important activities: (i) analogous to routing, explore structural properties on a global scale; (ii) design new mobility models that capture them.
Thakur et al. REF found that the performance of the mobility models is not analogous to realistic trends.
18162229
Analysis of Spatio-Temporal Preferences and Encounter Statistics for DTN Performance
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science" ] }
In the area of speech synthesis it is already possible to generate understandable speech with citation-form prosody for simple written texts. However, at ATR we are researching speech synthesis techniques for use in a speech translation environment. Dialogues in such conversations involve much richer forms of prosodic variation than are required for the reading of texts. In order for our translations to sound natural it is necessary for our synthesis system to offer a wide range of prosodic variability, which can be described at an appropriate level of abstraction. This paper describes a multi-level intonation system which generates a fundamental frequency (F0) contour based on input labelled with high-level discourse information, including speech act type and focusing information, as well as part of speech and syntactic constituent structure. The system is rule driven, but the rules and to some extent the levels themselves are derived from naturally spoken dialogues. This paper presents a framework for generating intonation parameters based on existing natural speech dialogues labelled with that intonation system, and marked with high-level discourse features. The goal of this study is to predict the intonation of discourse segments in spoken dialogue for synthesis in a speech-translation system. Spontaneous spoken dialogue involves more use of intonational variety than does reading of written prose, so the intonation specification component of our speech synthesizer has to take into account the prosody of different speech act types, and must allow for the generation of utterances with the same variability as found in natural dialogue. For example, the simple English word "okay" is heard often in conversation but performs different functions. Sometimes it has the meaning "I understand.", sometimes "Do you understand?"; other times it is used as a discourse marker indicating a change of topic, or as an end-of-turn marker signalling for the other partner to speak.
Different uses of a word have different intonational tunes. Already there are a number of intonation systems which allow a specification of intonation at a higher level of abstraction than directly representing the fundamental frequency contour (ToBI offers a discrete symbolic representation of linguistic intonation patterns, while Tilt offers a representation of physical pitch patterns; that difference is not significant in this work). All these intonation systems offer a method of representation from which varied F0 contours may be generated. In this paper we are primarily concerned with a level of discourse intonation "above" these intonation systems, that is, a system that will predict intonation parameters (for whatever intonation system is being used) from higher-level discourse information such as speech act, discourse function, syntactic structure and part-of-speech information. The following diagram positions this work in the process of generating an F0 contour in speech synthesis.
REF presented a model for generating intonation patterns based on high-level discourse features automatically extracted from dialogue speech.
14995818
Predicting the Intonation of Discourse Segments From Examples in Dialogue Speech
{ "venue": "ESCA/Aalborg University", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Abstract: Device-to-device (D2D) communication is proposed as a promising technique of future cellular networks which fulfills its potential in terms of high resource utilization. In this paper, in order to improve the achievable rate of D2D communication and the spectrum utilization, we consider the scenario that multiple D2D pairs can share uplink spectrum resources with multiple cellular users (CUs). We aim to maximize the overall system spectrum efficiency while satisfying the rate requirements of all CUs and guaranteeing that the system gain is positive. We formulate the joint optimization problem of subcarrier assignment and power allocation which falls naturally into a mixed integer non-linear programming form that is a difficult problem to solve. Hence, we propose a two-stage resource allocation scheme which comprises a subcarrier assignment by employing a heuristic greedy strategy, as well as a power allocation algorithm based on the Lagrangian dual method. Numerical results demonstrate the advantageous performance of our scheme in greatly increasing the system sum spectrum efficiency.
In REF , the authors study the problem of maximizing the overall system spectrum efficiency while satisfying the rate requirements of all cellular users. A two-stage resource allocation scheme (comprising a subcarrier assignment with a greedy method and a power allocation algorithm with the Lagrangian dual method) is proposed to deal with the interference in a network where multiple D2D pairs share uplink spectrum resources with the cellular users.
43380533
A Resource Allocation Scheme for Multi-D2D Communications Underlying Cellular Networks with Multi-Subcarrier Reusing
{ "venue": null, "journal": "Applied Sciences", "mag_field_of_study": [ "Mathematics" ] }
As deep nets are increasingly used in applications suited for mobile devices, a fundamental dilemma becomes apparent: the trend in deep learning is to grow models to absorb everincreasing data set sizes; however mobile devices are designed with very little memory and cannot store such large models. We present a novel network architecture, HashedNets, that exploits inherent redundancy in neural networks to achieve drastic reductions in model sizes. HashedNets uses a low-cost hash function to randomly group connection weights into hash buckets, and all connections within the same hash bucket share a single parameter value. These parameters are tuned to adjust to the HashedNets weight sharing architecture with standard backprop during training. Our hashing procedure introduces no additional memory overhead, and we demonstrate on several benchmark data sets that HashedNets shrink the storage requirements of neural networks substantially while mostly preserving generalization performance.
REF used hash functions to reduce model size while largely preserving generalization performance.
543597
Compressing Neural Networks with the Hashing Trick
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Mathematics", "Computer Science" ] }
Detailed connectivities have been studied in animals through invasive tracer techniques, but these invasive studies cannot be done in humans, and animal results cannot always be extrapolated to human systems. We have developed noninvasive neuronal fiber tracking for use in living humans, utilizing the unique ability of MRI to characterize water diffusion. We reconstructed fiber trajectories throughout the brain by tracking the direction of fastest diffusion (the fiber direction) from a grid of seed points, and then selected tracks that join anatomically or functionally (functional MRI) defined regions. We demonstrate diffusion tracking of fiber bundles in a variety of white matter classes with examples in the corpus callosum, geniculo-calcarine, and subcortical association pathways. Tracks covered long distances, navigated through divergences and tight curves, and manifested topological separations in the geniculo-calcarine tract consistent with tracer studies in animals and retinotopy studies in humans. Additionally, previously undescribed topologies were revealed in the other pathways. This approach enhances the power of modern imaging by enabling study of fiber connections among anatomically and functionally defined brain regions in individual human subjects.
Conturo et al. REF use volumetric regions of interest to select pathways that connect anatomically or functionally defined regions.
1237851
Tracking neuronal fiber pathways in the living human brain
{ "venue": "Proceedings of the National Academy of Sciences of the United States of America", "journal": "Proceedings of the National Academy of Sciences of the United States of America", "mag_field_of_study": [ "Biology", "Medicine" ] }
Bots are, for many Web and social media users, the source of many dangerous attacks or the carrier of unwanted messages, such as spam. Nevertheless, crawlers and software agents are a precious tool for analysts, and they are continuously executed to collect data or to test distributed applications. However, no one knows which is the real potential of a bot whose purpose is to control a community, to manipulate consensus, or to influence user behavior. It is commonly believed that the better an agent simulates human behavior in a social network, the more it can succeed to generate an impact in that community. We contribute to shed light on this issue through an online social experiment aimed to study to what extent a bot with no trust, no profile, and no aims to reproduce human behavior, can become popular and influential in a social media. Results show that a basic social probing activity can be used to acquire social relevance on the network and that the so-acquired popularity can be effectively leveraged to drive users in their social connectivity choices. We also register that our bot activity unveiled hidden social polarization patterns in the community and triggered an emotional response of individuals that brings to light subtle privacy hazards perceived by the user base.
REF created a bot that became highly connected in a social network for book lovers.
14017316
People are Strange when you're a Stranger: Impact and Influence of Bots on Social Networks
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science", "Physics" ] }
Abstract-Wireless spectrum sharing techniques have become very important due to the increasing demand for spectrum. Hence, there is growing interest in using real-time auctions for economically incentivizing users to share their excess spectrum. However, the communication and control requirements of real-time secondary spectrum auctions would be overwhelming for wireless networks. Prior literature has not considered critical communication constraints such as bid price quantization and error prone bid revelation. These schemes also have high overheads which cannot be accommodated in wireless standards. We propose auction schemes where a central clearing authority auctions spectrum to bidders, while explicitly accounting for these communication constraints. Our techniques are related to the posterior matching scheme, which is used in systems with channel output feedback. We consider several scenarios where the clearing authority's objective is to award spectrum to bidders who value spectrum the most. We prove that this objective is asymptotically attained by our scheme when bidders are nonstrategic with constant bids. We propose separate schemes to make strategic users reveal their private values truthfully, auction multiple subchannels among strategic users, and track slowly time-varying bid prices. We provide extensive simulation results to illustrate the performance and effectiveness of our algorithms.
Auction schemes in which a central clearing authority auctions spectrum to bidders, while explicitly accounting for communication constraints, are proposed in REF .
1800575
Secondary Spectrum Auctions for Markets With Communication Constraints
{ "venue": "IEEE Transactions on Wireless Communications", "journal": "IEEE Transactions on Wireless Communications", "mag_field_of_study": [ "Mathematics", "Computer Science" ] }
Abstract-The concept of vehicular ad-hoc networks enables the design of emergent automotive safety applications, which are based on the awareness among vehicles. Recently, a suite of 802.11p/WAVE protocols aimed at supporting car-to-car communications was approved by IEEE. Existing cellular infrastructure and, above all 3GPP LTE, is being considered as another communication technology appropriate for vehicular applications. This letter provides a theoretical framework which compares the basic patterns of both the technologies in the context of safetyof-life vehicular scenarios. We present mathematical models for the evaluation of the considered protocols in terms of successful beacon delivery probability.
Vinel REF compared the IEEE 802.11p/WAVE and LTE systems in terms of delay and scalability for vehicular safety applications.
2197998
3GPP LTE Versus IEEE 802.11p/WAVE: Which Technology is Able to Support Cooperative Vehicular Safety Applications?
{ "venue": "IEEE Wireless Communications Letters", "journal": "IEEE Wireless Communications Letters", "mag_field_of_study": [ "Computer Science", "Mathematics" ] }
We present Fortunata, a wiki-based framework designed to simplify the creation of semantically-enabled web applications. This framework facilitates the management and publication of semantic data in web-based applications, to the extent that application developers do not need to be skilled in client-side technologies, and promotes application reuse by fostering collaboration among developers by means of wiki plugins. We illustrate the use of this framework with two Fortunata-based applications named OMEMO and VPOET, and we evaluate it with two experiments performed with usability evaluators and application developers, respectively. These experiments show a good balance between the usability of the applications created with this framework and the effort and skills required of developers.
Fortunata, as a wiki-based framework, facilitates the management and publication of semantic data in web-based applications REF .
17029377
A contribution-based framework for the creation of semantically-enabled web applications
{ "venue": "Inf. Sci.", "journal": "Inf. Sci.", "mag_field_of_study": [ "Computer Science" ] }
The growing use of statistical and machine learning (ML) algorithms to analyze large datasets has given rise to new systems to scale such algorithms. But implementing new scalable algorithms in low-level languages is a painful process, especially for enterprise and scientific users. To mitigate this issue, a new breed of systems expose high-level bulk linear algebra (LA) primitives that are scalable. By composing such LA primitives, users can write analysis algorithms in a higher-level language, while the system handles scalability issues. But there is little work on a unified comparative evaluation of the scalability, efficiency, and effectiveness of such "scalable LA systems." We take a major step towards filling this gap. We introduce a suite of LA-specific tests based on our analysis of the data access and communication patterns of LA workloads and their use cases. Using our tests, we perform a comprehensive empirical comparison of a few popular scalable LA systems: MADlib, MLlib, SystemML, ScaLAPACK, SciDB, and TensorFlow using both synthetic data and a large real-world dataset. Our study has revealed several scalability bottlenecks, unusual performance trends, and even bugs in some systems. Our findings have already led to improvements in SystemML, with other systems' developers also expressing interest. All of our code and data scripts are available for download at https://adalabucsd.github.io/slab.html.
SLAB (Scalable Linear Algebra Benchmarking) REF presents a suite of LA-specific tests based on the analysis of data access and communication patterns of LA workloads.
13699031
A comparative evaluation of systems for scalable linear algebra-based analytics
{ "venue": "PVLDB", "journal": "PVLDB", "mag_field_of_study": [ "Computer Science" ] }
Abstract-High instruction cache hit rates are key to high performance. One known technique to improve the hit rate of caches is to minimize cache interference by improving the layout of the basic blocks of the code. However, the performance impact of this technique has been reported for application code only, even though there is evidence that the operating system often uses the cache heavily and with less uniform patterns than applications. It is unknown how well existing optimizations perform for systems code and whether better optimizations can be found. We address this problem in this paper. This paper characterizes, in detail, the locality patterns of the operating system code and shows that there is substantial locality. Unfortunately, caches are not able to extract much of it: Rarely-executed special-case code disrupts spatial locality, loops with few iterations that call routines make loop locality hard to exploit, and plenty of loop-less code hampers temporal locality. Based on our observations, we propose an algorithm to expose these localities and reduce interference in the cache. For a range of cache sizes, associativities, lines sizes, and organizations, we show that we reduce total instruction miss rates by 31-86 percent, or up to 2.9 absolute points. Using a simple model, this corresponds to execution time reductions of the order of 10-25 percent. In addition, our optimized operating system combines well with optimized and unoptimized applications.
Torrellas et al. REF designed a basic block reordering algorithm for operating system code.
6591791
Optimizing Instruction Cache Performance for Operating System Intensive Workloads
{ "venue": "IEEE Transactions on Computers", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
We describe our efforts to generate a large (100,000 instance) corpus of textual entailment pairs from the lead paragraph and headline of news articles. We manually inspected a small set of news stories in order to locate the most productive source of entailments, then built an annotation interface for rapid manual evaluation of further exemplars. With this training data we built an SVM-based document classifier, which we used for corpus refinement purposes-we believe that roughly three-quarters of the resulting corpus are genuine entailment pairs. We also discuss the difficulties inherent in manual entailment judgment, and suggest ways to ameliorate some of these.
Burger & Ferro REF generate a large corpus of TE pairs (100,000 pairs) from the lead paragraph and headline of English news articles.
2941806
Generating An Entailment Corpus From News Headlines
{ "venue": "Workshop On Empirical Modeling Of Semantic Equivalence And Entailment", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Organizations increasingly define many business processes as projects executed by "virtual (project) teams", where team members from within an organization cooperate with "outside" experts. Virtual teams require and enable people to collaborate across geographical distance and professional (organizational) boundaries and have a somewhat stable team configuration with roles and responsibilities assigned to team members. Different people, coming from different organizations will have their own preferences and experiences and cannot be expected to undergo a long learning cycle before participating in team activities. Thus, efficient communication, coordination, and process-aware collaboration remain a fundamental challenge. In this paper we discuss the current shortcomings of approaches in the light of virtual teamwork (mainly Workflow, Groupware, and Project Management) based on models and underlying metaphors. Furthermore, we present a novel approach for virtual teamwork by tightly integrating all associations between processes, artifacts, and resources. In this paper we analyze (a) the relevant criteria for process-aware collaboration system metaphors, (b) coordination models and constructs for organizational structures of virtual teams as well as for ad hoc and collaborative processes composed out of tasks, and (c) architectural considerations as well as design and implementation issues for an integrated process-aware collaboration system for virtual teams on the Internet.
Conceptual foundations for process-aware collaborative WfMS are presented in REF .
6159808
Caramba—A Process-Aware Collaboration System Supporting Ad hoc and Collaborative Processes in Virtual Teams
{ "venue": "Distributed and Parallel Databases", "journal": "Distributed and Parallel Databases", "mag_field_of_study": [ "Computer Science" ] }
Abstract-In this paper, we study a multiple-antenna system where the transmitter is equipped with quantized information about instantaneous channel realizations. Assuming that the transmitter uses the quantized information for beamforming, we derive a universal lower bound on the outage probability for any finite set of beamformers. The universal lower bound provides a concise characterization of the gain with each additional bit of feedback information regarding the channel. Using the bound, it is shown that finite-information systems approach the perfect-information case as 2^{-B/(t-1)}, where B is the number of feedback bits and t is the number of transmit antennas. The geometrical bounding technique, used in the proof of the lower bound, also leads to a design criterion for good beamformers, whose outage performance approaches the lower bound. The design criterion minimizes the maximum inner product between any two beamforming vectors in the beamformer codebook, and is equivalent to the problem of designing unitary space-time codes under certain conditions. Finally, we show that good beamformers are good packings of two-dimensional subspaces in a 2t-dimensional real Grassmannian manifold with chordal distance as the metric.
Analytical results for the performance of optimally quantized beamformers are developed in REF , where a universal lower bound on the outage probability for any finite set of beamformers with quantized feedback is derived.
407688
On beamforming with finite rate feedback in multiple-antenna systems
{ "venue": "IEEE Transactions on Information Theory", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
The success of deep learning in vision can be attributed to: (a) models with high capacity; (b) increased computational power; and (c) availability of large-scale labeled data. Since 2012, there have been significant advances in representation capabilities of the models and computational capabilities of GPUs. But the size of the biggest dataset has surprisingly remained constant. What will happen if we increase the dataset size by 10× or 100×? This paper takes a step towards clearing the clouds of mystery surrounding the relationship between 'enormous data' and visual deep learning. By exploiting the JFT-300M dataset which has more than 375M noisy labels for 300M images, we investigate how the performance of current vision tasks would change if this data was used for representation learning. Our paper delivers some surprising (and some expected) findings. First, we find that the performance on vision tasks increases logarithmically based on volume of training data size. Second, we show that representation learning (or pretraining) still holds a lot of promise. One can improve performance on many vision tasks by just training a better base model. Finally, as expected, we present new state-of-the-art results for different vision tasks including image classification, object detection, semantic segmentation and human pose estimation. Our sincere hope is that this inspires vision community to not undervalue the data and develop collective efforts in building larger datasets.
Sun et al. REF have shown that the greater the amount of data, the better the performance of deep learning networks.
6842201
Revisiting Unreasonable Effectiveness of Data in Deep Learning Era
{ "venue": "2017 IEEE International Conference on Computer Vision (ICCV)", "journal": "2017 IEEE International Conference on Computer Vision (ICCV)", "mag_field_of_study": [ "Computer Science" ] }
The problem of a total absence of parallel data is present for a large number of language pairs and can severely detriment the quality of machine translation. We describe a language-independent method to enable machine translation between a low-resource language (LRL) and a third language, e.g. English. We deal with cases of LRLs for which there is no readily available parallel data between the low-resource language and any other language, but there is ample training data between a closelyrelated high-resource language (HRL) and the third language. We take advantage of the similarities between the HRL and the LRL in order to transform the HRL data into data similar to the LRL using transliteration. The transliteration models are trained on transliteration pairs extracted from Wikipedia article titles. Then, we automatically back-translate monolingual LRL data with the models trained on the transliterated HRL data and use the resulting parallel corpus to train our final models. Our method achieves significant improvements in translation quality, close to the results that can be achieved by a general purpose neural machine translation system trained on a significant amount of parallel data. Moreover, the method does not rely on the existence of any parallel data for training, but attempts to bootstrap already existing resources in a related language.
Back translation is used to generate a pseudo-HRL from the monolingual data, while the HRL side of the parallel data is converted to pseudo-LRL using word substitution from a bilingual dictionary, similar to the approach in REF .
24860285
Neural machine translation for low-resource languages without parallel corpora
{ "venue": "Machine Translation", "journal": "Machine Translation", "mag_field_of_study": [ "Computer Science" ] }
We consider the problem of computing all pairs shortest paths (APSP) and shortest paths from k sources in a weighted graph in the distributed Congest model. For graphs with non-negative integer edge weights (including zero weights) we build on a recent pipelined algorithm [1] to obtain an Õ(λ^{1/4} · n^{5/4})-round bound for graphs with edge weights at most λ, and an Õ(n · Δ^{1/3})-round bound for shortest path distances at most Δ. Additionally, we simplify some of the procedures in the earlier APSP algorithms for non-negative edge weights in [8, 2]. We also present results for computing h-hop shortest paths and shortest paths from k given sources. In other results, we present a randomized exact APSP algorithm for graphs with arbitrary edge weights that runs in Õ(n^{4/3}) rounds w.h.p. in n, which improves the previous best Õ(n^{3/2}) bound, which is deterministic. We also present an Õ(n/ε²)-round deterministic (1 + ε) approximation algorithm for graphs with non-negative poly(n) integer weights (including zero edge weights), improving results in [13, 11] that hold only for positive integer weights.
In a companion paper REF we build on the pipelined algorithm we present here to obtain an algorithm for non-negative integer edge weights (including zero-weight edges) that runs in Õ(n · Δ^{1/3}) rounds when the shortest path distances are at most Δ, and in Õ(n^{5/4} λ^{1/4}) rounds when the edge weights are bounded by λ.
53041634
New and Simplified Distributed Algorithms for Weighted All Pairs Shortest Paths
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Mathematics", "Computer Science" ] }
We present techniques for speeding up the test-time evaluation of large convolutional networks, designed for object recognition tasks. These models deliver impressive accuracy, but each image evaluation requires millions of floating point operations, making their deployment on smartphones and Internet-scale clusters problematic. The computation is dominated by the convolution operations in the lower layers of the model. We exploit the redundancy present within the convolutional filters to derive approximations that significantly reduce the required computation. Using large state-of-the-art models, we demonstrate speedups of convolutional layers on both CPU and GPU by a factor of 2×, while keeping the accuracy within 1% of the original model.
REF exploited the linear structure of the neural network by finding an appropriate low-rank approximation of the parameters and keeping the accuracy within 1% of the original model.
7340116
Exploiting Linear Structure Within Convolutional Networks for Efficient Evaluation
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science" ] }
Dynamic specification mining involves discovering software behavior from traces for the purpose of program comprehension and bug detection. However, mining program behavior from execution traces is difficult for concurrent/distributed programs. Specifically, the inherent partial order relationships among events occurring across processes pose a big challenge to specification mining. In this paper, we propose a framework for mining partial orders so as to understand concurrent program behavior. Our miner takes in a set of concurrent program traces, and produces a message sequence graph (MSG) to represent the concurrent program behavior. An MSG represents a graph where the nodes of the graph are partial orders, represented as Message Sequence Charts. Mining an MSG allows us to understand concurrent program behaviors since the nodes of the MSG depict important "phases" or "interaction snippets" involving several concurrently executing processes. To demonstrate the power of this technique, we conducted experiments on mining behaviors of several fairly complex distributed systems. We show that our miner can produce the corresponding MSGs with both high precision and recall.
The work in REF mines message sequence graphs from the partial ordering of execution traces of concurrent programs.
7955343
Mining message sequence graphs
{ "venue": "ICSE '11", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Abstract-Time-synchronized channel hopping (TSCH) is currently the most efficient solution for collision-free, interference-avoiding communications in ad hoc wireless networks, such as wireless sensor networks, vehicular networks, and networks of robots or drones. However, all variants of TSCH require some form of centralized coordination to maintain the time-frequency slotting mechanism. This leads to slow convergence to steady state and moderate time-frequency slot utilization, particularly under node churn or mobility. We propose decentralized time-synchronized channel swapping (DT-SCS), a novel protocol for medium access control (MAC) in ad hoc wireless networks. Under the proposed protocol, nodes first converge to synchronous beacon packet transmissions across all available channels at the physical layer, with a balanced number of nodes in each channel. This is done by the novel coupling of distributed synchronization and desynchronization mechanisms, based on the concept of pulse-coupled oscillators, at the MAC layer. Decentralized channel swapping can then take place via peer-to-peer swap requests/acknowledgments made between concurrent transmitters in neighboring channels. We benchmark the convergence and network throughput of DT-SCS, TSCH, and the efficient multichannel MAC protocol (seen as the state of the art in decentralized, interference-avoiding, multichannel MAC protocols) under simulated packet losses at the MAC layer. Moreover, performance results from a Contiki-based deployment on TelosB motes reveal that DT-SCS is an excellent candidate for decentralized multichannel MAC-layer coordination, providing quick convergence to steady state, high bandwidth utilization under interference and hidden nodes, and high connectivity. Index Terms-Ad hoc networks, channel hopping, decentralized medium access control (MAC), pulse-coupled oscillators (PCOs).
A decentralized time-synchronized channel swapping (DT-SCS) protocol is presented in REF to overcome the shortcomings of time-synchronized channel hopping (TSCH) in ad hoc networks.
19043666
Decentralized Time-Synchronized Channel Swapping for Ad Hoc Wireless Networks
{ "venue": "IEEE Transactions on Vehicular Technology", "journal": "IEEE Transactions on Vehicular Technology", "mag_field_of_study": [ "Computer Science" ] }
Stephann Makri: S.Makri@ucl.ac.uk. Tel. +44 20 7679 5242. Fax +44 8456 382 573. Information-seeking is important for lawyers, who have access to many dedicated electronic resources. However, there is considerable scope for improving the design of these resources to better support information-seeking. One way of informing design is to use information-seeking models as theoretical lenses to analyse users' behaviour with existing systems. However, many models, including those informed by studying lawyers, analyse information-seeking at a high level of abstraction and are only likely to lead to broad-scoped design insights. We illustrate that one potentially useful (and lower-level) model is Ellis's, by using it as a lens to analyse and make design suggestions based on the information-seeking behaviour of twenty-seven academic lawyers, who were asked to think aloud whilst using electronic legal resources to find information for their work. We identify similar information-seeking behaviours to those originally found by Ellis and his colleagues in scientific domains, along with several that were not identified in previous studies, such as 'updating' (which we believe is particularly pertinent to legal information-seeking). We also present a refinement of Ellis's model based on the identification of several levels that the behaviours were found to operate at and the identification of sets of mutually exclusive subtypes of behaviours. Keywords: information-seeking models, HCI, legal, attorney, digital library, behaviour, Ellis. Information-seeking is an important part of lawyers' work and, unlike many other professions, the legal profession has access to many dedicated electronic resources. Notable examples are the high-profile commercial platforms LexisNexis and Westlaw (which are commonly referred to by the legal profession as legal databases, but can also be considered to be digital law libraries).
Despite access to these resources, lawyers often find legal information-seeking difficult, making them interesting to study. Much of the problem might lie with the fact that digital law libraries have traditionally been regarded as difficult to use. In one of the few user-centred studies on digital law libraries, Vollaro & Hawkins (1986) conducted interviews with patent attorneys at the AT&T Bell Laboratories, focusing on their information search behaviour. Nearly all attorneys mentioned difficulty in finding appropriate search terms and remembering the special features of each resource, especially when use was infrequent. Other problems were not knowing when all possible avenues had been pursued and forgetting commands. In another user-centred study, Yuan (1997) monitored the LexisNexis Quicklaw searches of a group of law students over a year and found the experience did not result in reduced errors or increased error-recovery. Yuan also found that some commands were rarely or never used, but law students were able to accomplish many tasks by knowing only a basic set of commands. Whilst this may allow lawyers to 'get by,' we argue that there is a need to improve the design of these resources in order to better support lawyers and their work. In order to improve the design of digital law libraries, and closely related to one of the motivations of this special issue, we suggest the need to develop a better understanding of lawyers' information-seeking behaviour with existing systems. One way of achieving this understanding is by using theoretical information-seeking models as lenses for the identification, analysis and description of their behaviour. In the remainder of this paper we highlight that many models (including Leckie et al.'s (1996) model which examines the behaviour of lawyers) analyse information-seeking at a high level of abstraction.
We suggest that whilst this may lead to broad design insights for interactive systems, richer data and detailed design insights can be gained by using models that analyse data at a lower-level of abstraction.
Makri et al. REF extended this work, focusing on the information behaviours observed within the legal profession.
2305779
Investigating the Information-Seeking Behaviour of Academic Lawyers: From Ellis’s Model to Design
{ "venue": "Information Processing and Management", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Despite our growing reliance on mobile phones for a wide range of daily tasks, their operation remains largely opaque. A number of previous studies have addressed elements of this problem in a partial fashion, trading off analytic comprehensiveness and deployment scale. We overcome the barriers to large-scale deployment (e.g., requiring rooted devices) and comprehensiveness of previous efforts by taking a novel approach that leverages the VPN API on mobile devices to design Haystack, an in-situ mobile measurement platform that operates exclusively on the device, providing full access to the device's network traffic and local context without requiring root access. We present the design of Haystack and its implementation in an Android app that we deploy via standard distribution channels. Using data collected from 450 users of the app, we exemplify the advantages of Haystack over the state of the art and demonstrate its seamless experience even under demanding conditions. We also demonstrate its utility to users and researchers in characterizing mobile traffic and privacy risks. Similar to previous approaches [52], Haystack leverages Android's standard VPN interface to capture outbound packets from applications. However, rather than tunneling the packets to a remote VPN server for inspection, Haystack intercepts, inspects, and forwards the user's traffic to its intended destination. This approach gives us raw packet-level access to outbound packets as well as flow-level access to incoming traffic without modifying the network path, and without requiring permissions beyond those needed by the VPN interface. Haystack therefore has the ability to monitor network activity in the proper context by operating locally on the device. For example, a TCP connection can be associated with a specific DNS lookup and both can be coupled with the originating application.
Further, we design Haystack to be extensible with new analyses and measurements added over time (e.g., by adding new protocol parsers and by supporting advanced measurement methods such as reactive measurements [10]), and new features to attract and educate users (e.g., ad block, malware detection, privacy leak prevention and network troubleshooting). Haystack is publicly available for anyone to install on Google Play and has been installed by 450 users to date [40] . We discuss the design and implementation of Haystack in §3 and §4, and evaluate its performance and resource use in §5. Our tests show that Haystack delivers sufficient throughput (26-55 Mbps) at low latency overhead (2-3 ms) to drive high-performance and delay-sensitive applications such as HD video streaming and VoIP without noticeable performance degradation for the user. While we consider our Haystack implementation prototypical in some respects (such as UI usability for nontechnical users), it has already provided interesting insights into app usage in the wild: in §6 we present preliminary findings about the adoption of encryption techniques, report on local-network traffic interacting with IoT devices, study app provenance and the use of thirdparty tracker services, and give an outlook on potential future applications.
State-of-the-art work in REF proposes the Haystack system, which aims to allow unobtrusive and comprehensive monitoring of network communications on mobile phones entirely from user space.
4308476
Haystack: A Multi-Purpose Mobile Vantage Point in User Space
{ "venue": null, "journal": "arXiv: Networking and Internet Architecture", "mag_field_of_study": [ "Computer Science" ] }
We present a model that uses a single first-person image to generate an egocentric basketball motion sequence in the form of a 12D camera configuration trajectory, which encodes a player's 3D location and 3D head orientation throughout the sequence. To do this, we first introduce a future convolutional neural network (CNN) that predicts an initial sequence of 12D camera configurations, aiming to capture how real players move during a one-on-one basketball game. We also introduce a goal verifier network, which is trained to verify that a given camera configuration is consistent with the final goals of real one-on-one basketball players. Next, we propose an inverse synthesis procedure to synthesize a refined sequence of 12D camera configurations that (1) sufficiently matches the initial configurations predicted by the future CNN, while (2) maximizing the output of the goal verifier network. Finally, by following the trajectory resulting from the refined camera configuration sequence, we obtain the complete 12D motion sequence. Our model generates realistic basketball motion sequences that capture the goals of real players, outperforming standard deep learning approaches such as recurrent neural networks (RNNs), long short-term memory networks (LSTMs), and generative adversarial networks (GANs).
Bertasius et al. REF addressed the motion planning problem of generating an egocentric basketball motion sequence in the form of a 12D camera configuration trajectory.
3665577
Egocentric Basketball Motion Planning from a Single First-Person Image
{ "venue": "2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition", "journal": "2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition", "mag_field_of_study": [ "Computer Science" ] }
ABSTRACT Finding genomic distance based on gene order is a classic problem in genome rearrangements. Efficient exact algorithms for genomic distances based on inversions and/or translocations have been found but are complicated by special cases, rare in simulations and empirical data. We seek a universal operation underlying a more inclusive set of evolutionary operations and yielding a tractable genomic distance with simple mathematical form. Results: We study a universal double-cut-and-join operation that accounts for inversions, translocations, fissions and fusions, but also produces circular intermediates which can be reabsorbed. The genomic distance, computable in linear time, is given by the number of breakpoints minus the number of cycles (b − c) in the comparison graph of the two genomes; the number of hurdles does not enter into it. Without changing the formula, we can replace generation and re-absorption of a circular intermediate by a generalized transposition, equivalent to a block interchange, with weight two. Our simple algorithm converts one multi-linear chromosome genome to another in the minimum distance.
To simplify the existing algorithms, Yancopoulos et al. introduced the double-cut-and-join (DCJ) operation, which can simulate reversals and block interchanges (a more generalized form of transposition), resulting in a simple and efficient algorithm REF.
9763380
Efficient sorting of genomic permutations by translocation , inversion and block interchange
{ "venue": "Bioinformatics", "journal": "Bioinformatics", "mag_field_of_study": [ "Medicine", "Computer Science" ] }
Abstract--Identifying anomalies rapidly and accurately is critical to the efficient operation of large computer networks. Accurately characterizing important classes of anomalies greatly facilitates their identification; however, the subtleties and complexities of anomalous traffic can easily confound this process. In this paper we report results of signal analysis of four classes of network traffic anomalies: outages, flash crowds, attacks and measurement failures. Data for this study consists of IP flow and SNMP measurements collected over a six month period at the border router of a large university. Our results show that wavelet filters are quite effective at exposing the details of both ambient and anomalous traffic. Specifically, we show that a pseudo-spline filter tuned at specific aggregation levels will expose distinct characteristics of each class of anomaly. We show that an effective way of exposing anomalies is via the detection of a sharp increase in the local variance of the filtered data. We evaluate traffic anomaly signals at different points within a network based on topological distance from the anomaly source or destination. We show that anomalies can be exposed effectively even when aggregated with a large amount of additional traffic. We also compare the difference between the same traffic anomaly signals as seen in SNMP and IP flow data, and show that the more coarse-grained SNMP data can also be used to expose anomalies effectively.
REF show that wavelet filters are quite effective at exposing the details and characteristics of ambient and anomalous traffic.
10473242
A signal analysis of network traffic anomalies
{ "venue": "IMW '02", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Gossip-based communication protocols are appealing in large-scale distributed applications such as information dissemination, aggregation, and overlay topology management. This paper factors out a fundamental mechanism at the heart of all these protocols: the peer-sampling service. In short, this service provides every node with peers to gossip with. We promote this service to the level of a first-class abstraction of a large-scale distributed system, similar to a name service being a first-class abstraction of a local-area system. We present a generic framework to implement a peer-sampling service in a decentralized manner by constructing and maintaining dynamic unstructured overlays through gossiping membership information itself. Our framework generalizes existing approaches and makes it easy to discover new ones. We use this framework to empirically explore and compare several implementations of the peer-sampling service. Through extensive simulation experiments we show that, although all protocols provide a good-quality uniform random stream of peers to each node locally, traditional theoretical assumptions about the randomness of the unstructured overlays as a whole do not hold in any of the instances. We also show that different design decisions result in severe differences from the point of view of two crucial aspects: load balancing and fault tolerance. Our simulations are validated by means of a wide-area implementation.
Nonetheless, the realization of the bottom level by the peer-sampling service REF results in a uniform communication overhead between nodes.
6266183
Gossip-based peer sampling
{ "venue": "TOCS", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Abstract-Traditional pattern recognition generally involves two tasks: unsupervised clustering and supervised classification. When class information is available, fusing the advantages of both clustering learning and classification learning into a single framework is an important problem worthy of study. To date, most algorithms generally treat clustering learning and classification learning in a sequential or two-step manner, i.e., first execute clustering learning to explore structures in data, and then perform classification learning on top of the obtained structural information. However, such sequential algorithms cannot always guarantee the simultaneous optimality for both clustering and classification learning. In fact, the clustering learning in these algorithms just aids the subsequent classification learning and does not benefit from the latter. To overcome this problem, a simultaneous learning framework for clustering and classification (SCC) is presented in this paper. SCC aims to achieve three goals: (1) acquiring robust classification and clustering simultaneously; (2) designing an effective and transparent classification mechanism; (3) revealing the underlying relationship between clusters and classes. To this end, with the Bayesian theory and the cluster posterior probabilities of classes, we define a single objective function into which the clustering process is directly embedded. By optimizing this objective function, effective and robust clustering and classification results are achieved simultaneously. Experimental results on both synthetic and real-life datasets show that SCC achieves promising classification and clustering results at one time.
However, the clustering task does not benefit from label information REF.
15150515
A simultaneous learning framework for clustering and classification
{ "venue": "Pattern Recognit.", "journal": "Pattern Recognit.", "mag_field_of_study": [ "Computer Science", "Mathematics" ] }
In this paper we describe our findings from a field study that was conducted at the Vancouver Aquarium to investigate how visitors interact with a large interactive table exhibit using multi-touch gestures. Our findings show that the choice and use of multi-touch gestures are influenced not only by general preferences for certain gestures but also by the interaction context and social context they occur in. We found that gestures are not executed in isolation but linked into sequences where previous gestures influence the formation of subsequent gestures. Furthermore, gestures were used beyond the manipulation of media items to support social encounters around the tabletop exhibit. Our findings indicate the importance of versatile many-to-one mappings between gestures and their actions that, other than one-to-one mappings, can support fluid transitions between gestures as part of sequences and facilitate social information exploration.
To show the importance of user-experience designers taking various aspects into consideration when defining gestures, Hinrichs and Carpendale found, in a study using interactive tabletops, that the choice and use of multi-touch gestures are influenced by the action and social context in which these gestures occur REF.
15142614
Gestures in the wild: studying multi-touch gesture sequences on interactive tabletop exhibits
{ "venue": "CHI", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
We propose novel controller synthesis techniques for probabilistic systems modelled using stochastic two-player games: one player acts as a controller, the second represents its environment, and probability is used to capture uncertainty arising due to, for example, unreliable sensors or faulty system components. Our aim is to generate robust controllers that are resilient to unexpected system changes at runtime, and flexible enough to be adapted if additional constraints need to be imposed. We develop a permissive controller synthesis framework, which generates multi-strategies for the controller, offering a choice of control actions to take at each time step. We formalise the notion of permissivity using penalties, which are incurred each time a possible control action is disallowed by a multi-strategy. Permissive controller synthesis aims to generate a multi-strategy that minimises these penalties, whilst guaranteeing the satisfaction of a specified system property. We establish several key results about the optimality of multi-strategies and the complexity of synthesising them. Then, we develop methods to perform permissive controller synthesis using mixed integer linear programming and illustrate their effectiveness on a selection of case studies.
In REF , the synthesis of multi-strategies for MDPs is studied.
2038480
Permissive Controller Synthesis for Probabilistic Systems
{ "venue": "Logical Methods in Computer Science, Volume 11, Issue 2 (June 30, 2015) lmcs:1576", "journal": null, "mag_field_of_study": [ "Computer Science", "Mathematics" ] }
Roundoff errors cannot be avoided when implementing numerical programs with finite precision. The ability to reason about rounding is especially important if one wants to explore a range of potential representations, for instance, for FPGAs or custom hardware implementations. This problem becomes challenging when the program does not employ solely linear operations, as non-linearities are inherent to many interesting computational problems in real-world applications. Existing approaches to this reasoning can lead to either inaccurate bounds or long analysis times in the presence of nonlinear correlations between variables. Furthermore, while it is easy to implement a straightforward method such as interval arithmetic, sophisticated techniques are less straightforward to implement in a formal setting. Thus there is a need for methods that output certificates that can be formally validated inside a proof assistant. We present a framework to provide upper bounds on absolute roundoff errors of floating-point nonlinear programs. This framework is based on optimization techniques employing semidefinite programming and sums of squares certificates, which can be checked inside the Coq theorem prover to provide formal roundoff error bounds for polynomial programs. Our tool covers a wide range of nonlinear programs, including polynomials and transcendental operations as well as conditional statements. We illustrate the efficiency and precision of this tool on non-trivial programs coming from biology, optimization, and space control. Our tool produces more accurate error bounds for 23% of all programs and yields better performance in 66% of all programs.
Real2Float REF computes certified bounds for round-off errors using an optimization technique that employs semidefinite programming and sums-of-squares certificates.
1363666
Certified Roundoff Error Bounds Using Semidefinite Programming
{ "venue": "TOMS", "journal": null, "mag_field_of_study": [ "Mathematics", "Computer Science" ] }
Ranking documents or sentences according to both topic and sentiment relevance should serve a critical function in helping users when topics and sentiment polarities of the targeted text are not explicitly given, as is often the case on the web. In this paper, we propose several sentiment information retrieval models in the framework of probabilistic language models, assuming that a user both inputs query terms expressing a certain topic and also specifies a sentiment polarity of interest in some manner. We combine sentiment relevance models and topic relevance models with model parameters estimated from training data, considering the topic dependence of the sentiment. Our experiments prove that our models are effective.
Eguchi and Lavrenko REF proposed several sentiment information retrieval models in the framework of probabilistic language models, assuming that users are willing to specify the sentiment polarity of interest when they input query terms.
8203481
Sentiment Retrieval Using Generative Models
{ "venue": "Conference On Empirical Methods In Natural Language Processing", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Abstract-Social networks allow rapid spread of ideas and innovations, while negative information can also propagate widely. When cascades with different opinions reach the same user, the cascade arriving first is the most likely to be adopted by the user. Therefore, once misinformation or rumor is detected, a natural containment method is to introduce a positive cascade competing against the rumor. Given a budget k, the rumor blocking problem asks for k seed users to trigger the spread of the positive cascade such that the number of users who are not influenced by the rumor is maximized. Prior work has shown that the rumor blocking problem can be approximated within a factor of (1 − 1/e − δ) by a classic greedy algorithm combined with Monte Carlo simulation with the running time of O( ), where n and m are the number of users and edges, respectively. Unfortunately, the Monte-Carlo-simulation-based methods are extremely time consuming and the existing algorithms either trade performance guarantees for practical efficiency or vice versa. In this paper, we present a randomized algorithm which runs in O(km ln n / δ²) expected time and provides a (1 − 1/e − δ)-approximation with high probability. The experimental results on both real-world and synthetic social networks show that the proposed randomized rumor blocking algorithm is much more efficient than the state-of-the-art method and is able to find seed nodes that are effective in limiting the spread of rumor.
In REF , the problem of rumor blocking is addressed under the competitive IC model and a randomized algorithm is developed for the selection of the seed set able to yield the maximum reduction in the number of bad-infected nodes.
18445077
An efficient randomized algorithm for rumor blocking in online social networks
{ "venue": "IEEE INFOCOM 2017 - IEEE Conference on Computer Communications", "journal": "IEEE INFOCOM 2017 - IEEE Conference on Computer Communications", "mag_field_of_study": [ "Computer Science" ] }
We introduce Torsk, a structured peer-to-peer low-latency anonymity protocol. Torsk is designed as an interoperable replacement for the relay selection and directory service of the popular Tor anonymity network, that decreases the bandwidth cost of relay selection and maintenance from quadratic to quasilinear while introducing no new attacks on the anonymity provided by Tor, and no additional delay to connections made via Tor. The resulting bandwidth savings make a modest-sized Torsk network significantly cheaper to operate, and allows low-bandwidth clients to join the network. Unlike previous proposals for P2P anonymity schemes, Torsk does not require all users to relay traffic for others. Torsk utilizes a combination of two P2P lookup mechanisms with complementary strengths in order to avoid attacks on the confidentiality and integrity of lookups. We show by analysis that previously known attacks on P2P anonymity schemes do not apply to Torsk, and report on experiments conducted with a 336-node wide-area deployment of Torsk, demonstrating its efficiency and feasibility.
Torsk REF, in particular, utilizes a combination of two P2P lookup mechanisms in order to preserve the confidentiality and integrity of lookups.
13970612
Scalable onion routing with torsk
{ "venue": "ACM Conference on Computer and Communications Security", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
There is rising interest in vector-space word embeddings and their use in NLP, especially given recent methods for their fast estimation at very large scale. Nearly all this work, however, assumes a single vector per word type-ignoring polysemy and thus jeopardizing their usefulness for downstream tasks. We present an extension to the Skip-gram model that efficiently learns multiple embeddings per word type. It differs from recent related work by jointly performing word sense discrimination and embedding learning, by non-parametrically estimating the number of senses per word type, and by its efficiency and scalability. We present new state-of-the-art results in the word similarity in context task and demonstrate its scalability by training with one machine on a corpus of nearly 1 billion tokens in less than 6 hours.
REF also extends skip-gram with multiple prototype embeddings where the number of senses per word is determined by a non-parametric approach.
15251438
Efficient Non-parametric Estimation of Multiple Embeddings per Word in Vector Space
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science", "Mathematics" ] }
We consider molecular communication, with information conveyed in the time of release of molecules. The main contribution of this paper is the development of a theoretical foundation for such a communication system. Specifically, we develop the additive inverse Gaussian (IG) noise channel model: a channel in which the information is corrupted by noise with an inverse Gaussian distribution. We show that such a channel model is appropriate for molecular communication in fluid media -when propagation between transmitter and receiver is governed by Brownian motion and when there is positive drift from transmitter to receiver. Taking advantage of the available literature on the IG distribution, upper and lower bounds on channel capacity are developed, and a maximum likelihood receiver is derived. Theory and simulation results are presented which show that such a channel does not have a single quality measure analogous to signal-to-noise ratio in the AWGN channel. It is also shown that the use of multiple molecules leads to reduced error rate in a manner akin to diversity order in wireless communications. Finally, we discuss some open problems in molecular communications that arise from the IG system model.
In REF , it is shown that additive inverse Gaussian noise can be used to model the molecular timing channel where the information is encoded into the release time of the molecules into the fluid medium with drift.
8543405
Molecular communication in fluid media: The additive inverse Gaussian noise channel
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science", "Mathematics" ] }
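The channel model and the maximum-likelihood receiver from the abstract can be sketched numerically: the first-passage time of Brownian motion with positive drift follows an inverse Gaussian distribution (available in NumPy as `rng.wald`), and the ML receiver compares the IG likelihood of the observed arrival time under each hypothesized release time. The parameter values `t0`, `t1`, `mu`, `lam` below are illustrative, not taken from the paper.

```python
import numpy as np

def ig_pdf(x, mu, lam):
    """Inverse Gaussian density with mean mu and shape lam (x > 0)."""
    x = np.asarray(x, dtype=float)
    return np.where(
        x > 0,
        np.sqrt(lam / (2 * np.pi * np.where(x > 0, x, 1.0) ** 3))
        * np.exp(-lam * (x - mu) ** 2 / (2 * mu**2 * np.where(x > 0, x, 1.0))),
        0.0,
    )

def simulate_timing_channel(bits, t0=0.0, t1=1.0, mu=2.0, lam=8.0, seed=0):
    """Additive IG noise channel: bit b releases a molecule at time t_b,
    the receiver observes y = t_b + n with n ~ IG(mu, lam), and an ML
    receiver picks the release time with the higher likelihood."""
    rng = np.random.default_rng(seed)
    release = np.where(bits == 1, t1, t0)
    y = release + rng.wald(mu, lam, size=bits.size)  # IG first-passage noise
    like0 = ig_pdf(y - t0, mu, lam)
    like1 = ig_pdf(y - t1, mu, lam)
    return (like1 > like0).astype(int)
```

Sweeping `mu` and `lam` independently reproduces the abstract's observation that no single quantity plays the role of SNR: error rate depends on both parameters, not on one ratio.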
The gap between the raw data from various data sources and the diverse intelligent applications has been an obstacle in the field of events analysis in online social networks. Most existing analysis systems focus on data from a certain single online social network platform and a limited range of analysis applications. To comprehensively understand events, the sources of data usually include multiple online social network platforms and different existing corpora. Thus, it is necessary to build a bridge to handle the online social data from different sources and support various analysis application requirements. In this paper, a unified semantic model for events analysis is proposed. The model contains well-designed classes and properties to tackle the lack of unified representation, and provenance information is also taken into consideration. Reasoning is supported to check the consistency of the data and to discover hidden knowledge such as tacit classification and implicit relationships. Schema mapping and data transformation methods are provided to handle the heterogeneous data from various online social network platforms and datasets. The design of the cross-media event analysis system is also presented. A comparison with existing systems shows the advantages of the proposed approach, and a case study shows the applicability and effectiveness of the model and system. Index terms: Online social network analysis, data platform, data model, semantic web.
Fang et al. REF also proposed an ontology to aggregate data from different social network sites for event analysis purposes.
85499412
A Unified Semantic Model for Cross-Media Events Analysis in Online Social Networks
{ "venue": "IEEE Access", "journal": "IEEE Access", "mag_field_of_study": [ "Computer Science" ] }
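The schema-mapping idea in the abstract, transforming heterogeneous platform records into one event representation that carries provenance, can be sketched minimally. The class and field names below are illustrative stand-ins, not the classes and properties of the paper's ontology.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Event:
    """Minimal stand-in for a unified event: a label, a timestamp,
    an optional location, and provenance recording the source."""
    label: str
    time: str
    location: Optional[str] = None
    provenance: dict = field(default_factory=dict)

def from_microblog(rec):
    # hypothetical schema mapping for a microblog-style record
    return Event(label=rec["text"], time=rec["created_at"],
                 location=rec.get("geo"),
                 provenance={"platform": "microblog", "id": rec["id"]})

def from_forum(rec):
    # hypothetical schema mapping for a forum-style record
    return Event(label=rec["title"], time=rec["posted"],
                 provenance={"platform": "forum", "id": rec["post_id"]})
```

Once both sources land in the same `Event` shape, downstream analysis (and consistency checks of the kind the paper performs via reasoning) can ignore where each record came from while the provenance field preserves that information.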
We propose a unifying algorithm for non-smooth non-convex optimization. The algorithm approximates the objective function by a convex model function and finds an approximate (Bregman) proximal point of the convex model. This approximate minimizer of the model function yields a descent direction, along which the next iterate is found. Complemented with an Armijo-like line search strategy, we obtain a flexible algorithm for which we prove (subsequential) convergence to a stationary point under weak assumptions on the growth of the model function error. Special instances of the algorithm with a Euclidean distance function include Gradient Descent, Forward-Backward Splitting, and ProxDescent, without the common requirement of a Lipschitz-continuous gradient. In addition, we consider a broad class of Bregman distance functions (generated by Legendre functions) replacing the Euclidean distance. The algorithm has a wide range of applications, including many linear and non-linear inverse problems in image processing and machine learning.
An extension of their framework to a general non-smooth first-order oracle and a flexible choice of Bregman distances was proposed in REF .
20324539
Non-smooth Non-convex Bregman Minimization: Unification and new Algorithms
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science", "Mathematics" ] }
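The scheme described in the abstract can be illustrated in its simplest instance: a linear model function with a Euclidean proximity term, whose proximal point is the gradient step, combined with an Armijo-like backtracking line search along the resulting descent direction. This is a sketch of that special case only; the line-search constants `sigma` and `shrink` are illustrative choices.

```python
import numpy as np

def model_prox_step(f, grad_f, x, t, sigma=1e-4, shrink=0.5, max_ls=50):
    """One iteration: the prox point of the linear model
    m(y) = f(x) + <grad f(x), y - x> plus (1/2t)||y - x||^2
    is x - t*grad_f(x); backtrack along that direction until an
    Armijo-like sufficient-decrease condition holds."""
    g = grad_f(x)
    d = -t * g                       # descent direction from the model's prox point
    fx = f(x)
    alpha = 1.0
    for _ in range(max_ls):
        if f(x + alpha * d) <= fx + sigma * alpha * float(g @ d):
            break
        alpha *= shrink
    return x + alpha * d

def minimize(f, grad_f, x0, t=1.0, iters=100):
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = model_prox_step(f, grad_f, x, t)
    return x
```

Swapping the Euclidean proximity term for a Bregman distance, or the linear model for the partially linearized models that yield Forward-Backward Splitting and ProxDescent, recovers the other instances the abstract lists; none of that generality is shown here.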