Dataset columns: src (string, 100 to 132k characters), tgt (string, 10 to 710 characters), paper_id (string, 3 to 9 characters), title (string, 9 to 254 characters), discipline (dict).
The concept of submodularity plays a vital role in combinatorial optimization. In particular, many important optimization problems can be cast as submodular maximization problems, including maximum coverage, maximum facility location and max cut in directed/undirected graphs. In this paper we present the first known approximation algorithms for the problem of maximizing a non-decreasing submodular set function subject to multiple linear constraints. Given a d-dimensional budget vector L, for some d ≥ 1, and an oracle for a non-decreasing submodular set function f over a universe U, where each element e ∈ U is associated with a d-dimensional cost vector, we seek a subset of elements S ⊆ U whose total cost is at most L, such that f(S) is maximized. We develop a framework for maximizing submodular functions subject to d linear constraints that yields a (1 − ε)(1 − e^{-1})-approximation to the optimum for any ε > 0, where d > 1 is some constant. Our study is motivated by a variant of the classical maximum coverage problem that we call maximum coverage with multiple packing constraints. We use our framework to obtain the same approximation ratio for this problem. To the best of our knowledge, this is the first time the theoretical bound of 1 − e^{-1} is (almost) matched for both of these problems.
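As context for the (1 − e^{-1}) guarantee, here is a minimal sketch of the classic cost-benefit greedy heuristic for maximizing a monotone submodular function under a single knapsack constraint. It is a standard baseline, not the paper's multi-constraint framework, and the toy coverage instance is made up for illustration.

```python
# Cost-benefit greedy for monotone submodular maximization under one
# knapsack constraint -- a standard baseline, not the paper's framework.
def greedy_knapsack(universe, f, cost, budget):
    """Repeatedly add the element with the best marginal gain per unit cost."""
    S = set()
    while True:
        best, best_ratio = None, 0.0
        for e in universe - S:
            if cost[e] <= budget:
                gain = f(S | {e}) - f(S)   # marginal gain of adding e
                if gain / cost[e] > best_ratio:
                    best, best_ratio = e, gain / cost[e]
        if best is None:
            return S
        S.add(best)
        budget -= cost[best]

# Toy maximum-coverage instance: f(S) = number of items covered by S.
sets = {"a": {1, 2}, "b": {2, 3, 4}, "c": {5}}
f = lambda S: len(set().union(*(sets[e] for e in S)))
print(greedy_knapsack(set(sets), f, {"a": 1, "b": 2, "c": 1}, budget=3))
```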
A framework was proposed in REF for maximizing a submodular function subject to a d-knapsack constraint, which yields a (1 − e^{-1} − ε)-approximation for any ε > 0.
2987767
Maximizing Submodular Set Functions Subject to Multiple Linear Constraints
{ "venue": "SODA", "journal": null, "mag_field_of_study": [ "Mathematics", "Computer Science" ] }
Quantized deep neural networks (QDNNs) are attractive due to their much lower memory storage and faster inference speed than their regular full precision counterparts. To maintain the same performance level especially at low bit-widths, QDNNs must be retrained. Their training involves piecewise constant activation functions and discrete weights, hence mathematical challenges arise. We introduce the notion of coarse gradient and propose the blended coarse gradient descent (BCGD) algorithm, for training fully quantized neural networks. Coarse gradient is generally not a gradient of any function but an artificial ascent direction. The weight update of BCGD goes by coarse gradient correction of a weighted average of the full precision weights and their quantization (the so-called blending), which yields sufficient descent in the objective value and thus accelerates the training. Our experiments demonstrate that this simple blending technique is very effective for quantization at extremely low bit-width such as binarization. In full quantization of ResNet-18 for the ImageNet classification task, BCGD gives 64.36% top-1 accuracy with binary weights across all layers and 4-bit adaptive activation. If the weights in the first and last layers are kept in full precision, this number increases to 65.46%. As theoretical justification, we show convergence analysis of coarse gradient descent for a two-layer neural network model with Gaussian input data, and prove that the expected coarse gradient correlates positively with the underlying true gradient. Keywords: weight/activation quantization; blended coarse gradient descent; sufficient descent property; deep neural networks. Mathematics Subject Classification (2010): 90C35, 90C26, 90C52, 90C90.
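A minimal sketch of the blended update described above, assuming a scaled-sign binarization scheme and illustrative values for the blending weight rho and the learning rate; in a real training loop the coarse gradient would come from a straight-through-style estimator evaluated at the quantized weights.

```python
import numpy as np

def sign_binarize(w):
    """1-bit quantization: scaled sign of the weights (assumed scheme)."""
    return np.mean(np.abs(w)) * np.sign(w)

def bcgd_step(w, coarse_grad, lr=0.1, rho=0.9):
    """One blended coarse gradient descent step: correct a weighted average
    of the full-precision weights and their quantization by the coarse
    gradient. rho and lr are illustrative, not the paper's settings."""
    return (1 - rho) * w + rho * sign_binarize(w) - lr * coarse_grad

# Toy usage with a made-up coarse gradient.
w = np.random.randn(5)
w = bcgd_step(w, coarse_grad=np.random.randn(5))
```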
Also a blended coarse gradient descent method REF is introduced to train fully quantized DNNs in weights and activation functions, and overcome vanishing gradients.
52015115
Blended Coarse Gradient Descent for Full Quantization of Deep Neural Networks
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Mathematics", "Computer Science" ] }
In this paper we introduce a new method for text detection in natural images. The method comprises two contributions: First, a fast and scalable engine to generate synthetic images of text in clutter. This engine overlays synthetic text to existing background images in a natural way, accounting for the local 3D scene geometry. Second, we use the synthetic images to train a Fully-Convolutional Regression Network (FCRN) which efficiently performs text detection and bounding-box regression at all locations and multiple scales in an image. We discuss the relation of FCRN to the recently-introduced YOLO detector, as well as other end-to-end object detection systems based on deep learning. The resulting detection network significantly outperforms current methods for text detection in natural images, achieving an F-measure of 84.2% on the standard ICDAR 2013 benchmark. Furthermore, it can process 15 images per second on a GPU.
Gupta et al. REF introduce a Fully-Convolutional Regression Network to jointly achieve text detection and bounding-box regression at multiple image scales.
206593628
Synthetic Data for Text Localisation in Natural Images
{ "venue": "2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)", "journal": "2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)", "mag_field_of_study": [ "Computer Science" ] }
Abstract-State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [1] and Fast R-CNN [2] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features; using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model [3], our detection system has a frame rate of 5 fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available. Region proposal methods typically rely on inexpensive features and economical inference schemes. Selective Search [4], one of the most popular methods, greedily merges superpixels based on engineered low-level features. Yet when compared to efficient detection networks [2], Selective Search is an order of magnitude slower, at 2 seconds per image in a CPU implementation. EdgeBoxes [6] currently provides the best tradeoff between proposal quality and speed, at 0.2 seconds per image. Nevertheless, the region proposal step still consumes as much running time as the detection network. One may note that fast region-based CNNs take advantage of GPUs, while the region proposal methods used in research are implemented on the CPU, making such runtime comparisons inequitable. An obvious way to accelerate proposal computation is to re-implement it for the GPU. This may be an effective engineering solution, but re-implementation ignores the down-stream detection network and therefore misses important opportunities for sharing computation. In this paper, we show that an algorithmic change, computing proposals with a deep convolutional neural network, leads to an elegant and effective solution where proposal computation is nearly cost-free given the detection network's computation. To this end, we introduce novel Region Proposal Networks (RPNs) that share convolutional layers with state-of-the-art object detection networks [1], [2]. By sharing convolutions at test-time, the marginal cost for computing proposals is small (e.g., 10 ms per image). Our observation is that the convolutional feature maps used by region-based detectors, like Fast R-CNN, can also be used for generating region proposals. On top of these convolutional features, we construct an RPN by adding a few additional convolutional layers that simultaneously regress region bounds and objectness scores at each location on a regular grid. The RPN is thus a kind of fully convolutional network (FCN) [7] and can be trained end-to-end specifically for the task of generating detection proposals. RPNs are designed to efficiently predict region proposals with a wide range of scales and aspect ratios.
In contrast to prevalent methods [1], [2], [8], [9] that use pyramids of images (Fig. 1a) or pyramids of filters (Fig. 1b), we introduce novel "anchor" boxes that serve as references at multiple scales and aspect ratios. Our scheme can be thought of as a pyramid of regression references (Fig. 1c), which avoids enumerating images or filters of multiple scales or aspect ratios. This model performs well when trained and tested using single-scale images and thus benefits running speed. To unify RPNs with Fast R-CNN [2] object detection networks, we propose a training scheme that alternates between fine-tuning for the region proposal task and fine-tuning for object detection, while keeping the proposals fixed.
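A small sketch of the anchor idea: reference boxes at several scales and aspect ratios, all centred at one grid location, from which the RPN regresses offsets. The base size, scales and ratios below are illustrative choices, not necessarily the paper's exact configuration.

```python
import numpy as np

def make_anchors(base_size=16, scales=(8, 16, 32), ratios=(0.5, 1.0, 2.0)):
    """Anchor boxes (x1, y1, x2, y2) centred at the origin, one per
    scale/aspect-ratio combination."""
    anchors = []
    for s in scales:
        area = float(base_size * s) ** 2
        for r in ratios:                 # r = height / width
            w = np.sqrt(area / r)
            h = w * r
            anchors.append([-w / 2, -h / 2, w / 2, h / 2])
    return np.array(anchors)

print(make_anchors().shape)  # (9, 4): 3 scales x 3 aspect ratios
```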
The next step, which led to real-time object detection with region proposal networks, was Faster R-CNN REF .
10328909
Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks
{ "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "mag_field_of_study": [ "Computer Science", "Medicine" ] }
Abstract. During face-to-face interactions, listeners use backchannel feedback such as head nods as a signal to the speaker that the communication is working and that they should continue speaking. Predicting these backchannel opportunities is an important milestone for building engaging and natural virtual humans. In this paper we show how sequential probabilistic models (e.g., Hidden Markov Model or Conditional Random Fields) can automatically learn from a database of human-to-human interactions to predict listener backchannels using the speaker multimodal output features (e.g., prosody, spoken words and eye gaze). The main challenges addressed in this paper are automatic selection of the relevant features and optimal feature representation for probabilistic models. For prediction of visual backchannel cues (i.e., head nods), our prediction model shows a statistically significant improvement over a previously published approach based on hand-crafted rules.
In REF , sequential probabilistic models were used to select multimodal features from a speaker (e.g. prosody, gaze and spoken words) to predict visual back-channel cues (e.g. head nods).
5652593
Predicting listener backchannels: A probabilistic multimodal approach
{ "venue": "In In proceedings of Intelligent Virtual Agents (IVA", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Abstract. Over the last decades, several billion Web pages have been made available on the Web. The ongoing transition from the current Web of unstructured data to the Web of Data yet requires scalable and accurate approaches for the extraction of structured data in RDF (Resource Description Framework) from these websites. One of the key steps towards extracting RDF from text is the disambiguation of named entities. While several approaches aim to tackle this problem, they still achieve poor accuracy. We address this drawback by presenting AGDISTIS, a novel knowledge-base-agnostic approach for named entity disambiguation. Our approach combines the Hypertext-Induced Topic Search (HITS) algorithm with label expansion strategies and string similarity measures. Based on this combination, AGDISTIS can efficiently detect the correct URIs for a given set of named entities within an input text. We evaluate our approach on eight different datasets against state-of-the-art named entity disambiguation frameworks. Our results indicate that we outperform the state-of-the-art approach by up to 29% F-measure.
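A toy sketch of the graph-plus-HITS step: build a graph over the candidate URIs of all mentions, run HITS, and keep the highest-authority candidate per mention. The candidate sets and knowledge-base edges are invented for illustration, and the label-expansion and string-similarity stages are omitted.

```python
import networkx as nx

candidates = {"Paris": ["dbr:Paris", "dbr:Paris_Hilton"],
              "Seine": ["dbr:Seine"],
              "France": ["dbr:France"]}
kb_edges = [("dbr:Paris", "dbr:France"), ("dbr:France", "dbr:Paris"),
            ("dbr:Seine", "dbr:France"), ("dbr:Paris", "dbr:Seine")]

G = nx.DiGraph(kb_edges)
G.add_nodes_from(u for uris in candidates.values() for u in uris)
hubs, authorities = nx.hits(G, max_iter=100)

# Disambiguate each mention to its most authoritative candidate URI.
for mention, uris in candidates.items():
    print(mention, "->", max(uris, key=lambda u: authorities.get(u, 0.0)))
```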
While the last two examples address only specific ontologies, AGDISTIS REF is an approach for named entity disambiguation (NED) that can use any ontology.
14301767
AGDISTIS - Graph-Based Disambiguation of Named Entities Using Linked Data
{ "venue": "International Semantic Web Conference", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Since its inception, the blockchain technology has shown promising application prospects. From the initial cryptocurrency to the current smart contract, blockchain has been applied to many fields. Although there are some studies on the security and privacy issues of blockchain, a systematic examination of the security of blockchain systems is still lacking. In this paper, we conduct a systematic study on the security threats to blockchain and survey the corresponding real attacks by examining popular blockchain systems. We also review the security enhancement solutions for blockchain, which could be used in the development of various blockchain systems, and suggest some future directions to stir research efforts into this area.
For instance, Li et al. REF survey the security attacks on blockchain platforms and summarise the corresponding security enhancements.
3628110
A Survey on the Security of Blockchain Systems
{ "venue": "Li X, Jiang P, Chen T, Luo X, Wen Q. A survey on the security of blockchain systems, Future Generation Computer Systems (2017)", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Abstract. This paper is an attempt to understand the processes by which software ages. We define code to be aged or decayed if its structure makes it unnecessarily difficult to understand or change, and we measure the extent of decay by counting the number of faults in code in a period of time. Using change management data from a very large, long-lived software system, we explore the extent to which measurements from the change history are successful in predicting the distribution over modules of these incidences of faults. In general, process measures based on the change history are more useful in predicting fault rates than product metrics of the code: for instance, the number of times code has been changed is a better indication of how many faults it will contain than is its length. We also compare the fault rates of code of various ages, finding that if a module is, on average, a year older than an otherwise similar module, the older module will have roughly a third fewer faults. Our most successful model measures the fault potential of a module as the sum of contributions from all of the times the module has been changed, with large, recent changes receiving the most weight.
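The "sum of weighted change contributions" model lends itself to a short sketch. The exponential down-weighting of older changes and the half-life value below are assumptions for illustration; the paper fits its own weighting from the change history.

```python
import time

def fault_potential(changes, now, half_life_days=365.0):
    """Fault potential of a module: sum of contributions from all past
    changes, with large, recent changes weighted most (assumed decay form)."""
    day = 86400.0
    return sum(c["lines_changed"] * 0.5 ** ((now - c["time"]) / day / half_life_days)
               for c in changes)

now = time.time()
changes = [
    {"lines_changed": 200, "time": now - 30 * 86400},        # large, recent
    {"lines_changed": 200, "time": now - 3 * 365 * 86400},   # large, old
]
print(fault_potential(changes, now))  # the recent change dominates the score
```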
They find that measures based on the change history are more useful in predicting fault rates than metrics based on the code, such as size REF .
1209510
Predicting fault incidence using software change history
{ "venue": "IEEE Transactions on Software Engineering", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First, we highlight convolution with upsampled filters, or 'atrous convolution', as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-views, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed "DeepLab" system sets the new state-of-the-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 percent mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.
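A compact PyTorch sketch of atrous spatial pyramid pooling: parallel 3x3 convolutions with different dilation rates over the same feature map, fused by summation. Channel counts, rates, and the sum fusion are illustrative; DeepLab's released configuration may differ.

```python
import torch
import torch.nn as nn

class ASPP(nn.Module):
    """Atrous spatial pyramid pooling (sketch): probe the feature map with
    filters at several sampling (dilation) rates and fuse the responses."""
    def __init__(self, in_ch=256, out_ch=256, rates=(6, 12, 18, 24)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r)
            for r in rates
        )

    def forward(self, x):
        return sum(branch(x) for branch in self.branches)  # sum fusion

out = ASPP()(torch.randn(1, 256, 33, 33))
print(out.shape)  # torch.Size([1, 256, 33, 33]) -- resolution preserved
```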
Some versions of DeepLab REF use fully connected conditional random fields (CRF) in addition to the last layer CNN features in order to improve the localization performance.
3429309
DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs
{ "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "mag_field_of_study": [ "Computer Science", "Medicine" ] }
We present a machine learning-based approach to lossy image compression which outperforms all existing codecs, while running in real-time. Our algorithm typically produces files 2.5 times smaller than JPEG and JPEG 2000, 2 times smaller than WebP, and 1.7 times smaller than BPG on datasets of generic images across all quality levels. At the same time, our codec is designed to be lightweight and deployable: for example, it can encode or decode the Kodak dataset in around 10ms per image on GPU. Our architecture is an autoencoder featuring pyramidal analysis, an adaptive coding module, and regularization of the expected codelength. We also supplement our approach with adversarial training specialized towards use in a compression setting: this enables us to produce visually pleasing reconstructions for very low bitrates.
REF also employs an autoencoder architecture and explores the possibility of using adversarial training within a compression setting.
8291598
Real-Time Adaptive Image Compression
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science", "Mathematics" ] }
Abstract. This paper addresses a new problem, that of multiscale activity recognition. Our goal is to detect and localize a wide range of activities, including individual actions and group activities, which may simultaneously co-occur in high-resolution video. The video resolution allows for digital zoom-in (or zoom-out) for examining fine details (or coarser scales), as needed for recognition. The key challenge is how to avoid running a multitude of detectors at all spatiotemporal scales, and yet arrive at a holistically consistent video interpretation. To this end, we use a three-layered AND-OR graph to jointly model group activities, individual actions, and participating objects. The AND-OR graph allows a principled formulation of efficient, cost-sensitive inference via an explore-exploit strategy. Our inference optimally schedules the following computational processes: 1) direct application of activity detectors, called the α process; 2) bottom-up inference based on detecting activity parts, called the β process; and 3) top-down inference based on detecting activity context, called the γ process. The scheduling iteratively maximizes the log-posteriors of the resulting parse graphs. For evaluation, we have compiled and benchmarked a new dataset of high-resolution videos of group and individual activities co-occurring in a courtyard of the UCLA campus.
Amer et al. REF propose an explore-exploit strategy that schedules processes of top-down inference using activity context and bottom-up inference using activity parts.
7337379
Cost-sensitive top-down/bottom-up inference for multiscale activity recognition
{ "venue": "in European Conference on Computer Vision", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Probabilistic Latent Semantic Analysis is a novel statistical technique for the analysis of two-mode and co-occurrence data, which has applications in information retrieval and filtering, natural language processing, machine learning from text, and in related areas. Compared to standard Latent Semantic Analysis which stems from linear algebra and performs a Singular Value Decomposition of co-occurrence tables, the proposed method is based on a mixture decomposition derived from a latent class model. This results in a more principled approach which has a solid foundation in statistics. In order to avoid overfitting, we propose a widely applicable generalization of maximum likelihood model fitting by tempered EM. Our approach yields substantial and consistent improvements over Latent Semantic Analysis in a number of experiments.
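A minimal NumPy sketch of (tempered) EM for pLSA on a document-word count matrix. Applying the tempering exponent beta to P(w|z) in the E-step follows one common reading of tempered EM; beta = 1 recovers standard EM, and the initialisation is arbitrary.

```python
import numpy as np

def plsa_em(N, K=2, iters=50, beta=1.0, seed=0):
    """Fit P(z|d) and P(w|z) to a doc-word count matrix N by (tempered) EM."""
    rng = np.random.default_rng(seed)
    D, W = N.shape
    p_z_d = rng.dirichlet(np.ones(K), size=D)    # P(z|d), shape (D, K)
    p_w_z = rng.dirichlet(np.ones(W), size=K)    # P(w|z), shape (K, W)
    for _ in range(iters):
        # E-step: responsibilities P(z|d,w), tempered by beta.
        q = p_z_d[:, :, None] * p_w_z[None, :, :] ** beta   # (D, K, W)
        q /= q.sum(axis=1, keepdims=True) + 1e-12
        # M-step: re-estimate both multinomials from expected counts.
        expected = N[:, None, :] * q                        # (D, K, W)
        p_w_z = expected.sum(axis=0)
        p_w_z /= p_w_z.sum(axis=1, keepdims=True)
        p_z_d = expected.sum(axis=2)
        p_z_d /= p_z_d.sum(axis=1, keepdims=True)
    return p_z_d, p_w_z

N = np.array([[4, 3, 0, 0], [5, 2, 1, 0], [0, 0, 6, 4]], dtype=float)
p_z_d, p_w_z = plsa_em(N)
```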
An improved version of the standard latent semantic analysis model was put forth in REF .
653762
Probabilistic Latent Semantic Analysis
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Mathematics", "Computer Science" ] }
Code cloning is not only assumed to inflate maintenance costs but also considered defect-prone as inconsistent changes to code duplicates can lead to unexpected behavior. Consequently, the identification of duplicated code, clone detection, has been a very active area of research in recent years. Up to now, however, no substantial investigation of the consequences of code cloning on program correctness has been carried out. To remedy this shortcoming, this paper presents the results of a large-scale case study that was undertaken to find out if inconsistent changes to cloned code can indicate faults. For the analyzed commercial and open source systems we not only found that inconsistent changes to clones are very frequent but also identified a significant number of faults induced by such changes. The clone detection tool used in the case study implements a novel algorithm for the detection of inconsistent clones. It is available as open source to enable other researchers to use it as basis for further investigations.
Juergens et al. REF find, after manual inspection of clones in four industrial and one open source system, that inconsistent changes to clones are very frequent and can lead to significant numbers of faults in software.
6196921
Do Code Clones Matter?
{ "venue": "Proc. 31st International Conference on Software Engineering (ICSE '09). IEEE, 2009", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Predicting the occurrence of links is a fundamental problem in networks. In the link prediction problem we are given a snapshot of a network and would like to infer which interactions among existing members are likely to occur in the near future or which existing interactions are we missing. Although this problem has been extensively studied, the challenge of how to effectively combine the information from the network structure with rich node and edge attribute data remains largely open. We develop an algorithm based on Supervised Random Walks that naturally combines the information from the network structure with node and edge level attributes. We achieve this by using these attributes to guide a random walk on the graph. We formulate a supervised learning task where the goal is to learn a function that assigns strengths to edges in the network such that a random walker is more likely to visit the nodes to which new links will be created in the future. We develop an efficient training algorithm to directly learn the edge strength estimation function. Our experiments on the Facebook social graph and large collaboration networks show that our approach outperforms state-of-the-art unsupervised approaches as well as approaches that are based on feature extraction.
REF propose a supervised random walk algorithm to estimate the strength of social links.
7851677
Supervised random walks: predicting and recommending links in social networks
{ "venue": "WSDM '11", "journal": null, "mag_field_of_study": [ "Computer Science", "Physics", "Mathematics" ] }
We resolve several fundamental questions in the area of distributed functional monitoring, initiated by Cormode, Muthukrishnan, and Yi (SODA, 2008), and receiving recent attention. In this model there are k sites each tracking their input streams and communicating with a central coordinator. The coordinator's task is to continuously maintain an approximate output to a function computed over the union of the k streams. The goal is to minimize the number of bits communicated. Let the p-th frequency moment be defined as F_p = Σ_i f_i^p, where f_i is the frequency of element i. We show the randomized communication complexity of estimating the number of distinct elements (that is, F_0) up to a 1 + ε factor is Ω(k/ε^2), improving upon the previous Ω(k + 1/ε^2) bound and matching known upper bounds. For F_p, p > 1, we improve the previous Ω(k + 1/ε^2) communication bound to Ω(k^{p−1}/ε^2). We obtain similar improvements for heavy hitters, empirical entropy, and other problems. Our lower bounds are the first of any kind in distributed functional monitoring to depend on the product of k and 1/ε^2. Moreover, the lower bounds are for the static version of the distributed functional monitoring model where the coordinator only needs to compute the function at the time when all k input streams end; surprisingly they almost match what is achievable in the (dynamic version of) distributed functional monitoring model where the coordinator needs to keep track of the function continuously at any time step. We also show that we can estimate F_p, for any p > 1, using Õ(k^{p−1} poly(ε^{−1})) communication. This drastically improves upon the previous Õ(k^{2p+1} N^{1−2/p} poly(ε^{−1})) bound of Cormode, Muthukrishnan, and Yi for general p, and their Õ(k^2/ε + k^{1.5}/ε^3) bound for p = 2. For p = 2, our bound resolves their main open question. Our lower bounds are based on new direct sum theorems for approximate majority, and yield improvements to classical problems in the standard data stream model. First, we improve the known lower bound for estimating F_p, p > 2, in t passes from Ω(n^{1−2/p}/(ε^{2/p} t)) to Ω(n^{1−2/p}/(ε^{4/p} t)), giving the first bound that matches what we expect when p = 2 for any constant number of passes. Second, we give the first lower bound for estimating F_0 in t passes with Ω(1/(ε^2 t)) bits of space that does not use the hardness of the gap-hamming problem.
Perhaps the most directly related result to our upper bound for F_p estimation, p ∈ (1, 2], is in the distributed functional monitoring model, where Woodruff and Zhang REF show a O(m^{p−1} poly(log(n), 1/ε) + m ε^{−1} log(n) log(log(n)/ε)) total communication upper bound.
849480
Tight bounds for distributed functional monitoring
{ "venue": "STOC '12", "journal": null, "mag_field_of_study": [ "Mathematics", "Computer Science" ] }
The importance of service contracts providing a suitably synthetic description of software services is widely accepted. While different types of information - ranging from extra-functional properties to ontological annotations to behavioural descriptions - have been proposed to be included in service contracts, no widely accepted de facto standard has yet emerged for describing service contracts, except for signature information. The lack of a de facto standard is inhibiting large-scale deployment of techniques and tools supporting enhanced discovery and composition of services. In this paper we discuss the potentially huge advantages of exploiting behavioural information for service discovery and composition, and relate them to the cost of generating such information and to the needed trade-off between expressiveness and the cost and value of analysing such information. On such grounds, we also discuss the potential suitability of some well-known modelling approaches to become the de facto standard to represent service behaviour in contracts, also in view of contextual factors (such as required know-how and current employment).
Brogi REF for instance discusses the requirement of service behavior representation in web services and the potential advantages of exploiting this behavioral information for service discovery and composition.
35532395
On the potential advantages of exploiting behavioural information for contract-based service discovery and composition
{ "venue": "J. Log. Algebr. Program.", "journal": "J. Log. Algebr. Program.", "mag_field_of_study": [ "Computer Science" ] }
Picture yourself as a fashion designer needing images of fabrics with a particular mixture of colors, a museum cataloger looking for artifacts of a particular shape and textured pattern, or a movie producer needing a video clip of a red car-like object moving from right to left with the camera zooming. How do you find these images? Even though today's technology enables us to acquire, manipulate, transmit, and store vast on-line image and video collections, the search methodologies used to find pictorial information are still limited due to difficult research problems (see "Semantic versus nonsemantic" sidebar). Typically, these methodologies depend on file IDs, keywords, or text associated with the images. And, although powerful, they don't allow queries based directly on the visual properties of the images, are dependent on the particular vocabulary used, and don't provide queries for images similar to a given image. Research on ways to extend and improve query methods for image databases is widespread, and results have been presented in workshops, conferences, and surveys. We have developed the QBIC (Query by Image Content) system to explore content-based retrieval methods. QBIC allows queries on large image and video databases based on example images, user-constructed sketches and drawings, and selected color and texture patterns. At first glance, content-based querying appears deceptively simple because we humans seem to be so good at it. If a program can be written to extract semantically relevant text phrases from images, the problem may be solved by using currently available text-search technology. Unfortunately, in an unconstrained environment, the task of writing this program is beyond the reach of current technology in image understanding. At an artificial intelligence conference several years ago, a challenge was issued to the audience to write a program that would identify all the dogs pictured in a children's book, a task most 3-year-olds can easily accomplish. Nobody in the audience accepted the challenge, and this remains an open problem. Perceptual organization, the process of grouping image features into meaningful objects and attaching semantic descriptions to scenes through model matching, is an unsolved problem in image understanding. Humans are much better than computers at extracting semantic descriptions from pictures. Computers, however, are better than humans at measuring properties and retaining these in long-term memory. One of the guiding principles used by QBIC is to let computers do what they do best (quantifiable measurement) and let humans do what they do best (attaching semantic meaning). QBIC can find "fish-shaped objects," since shape is a measurable property that can be extracted. However, since fish occur in many shapes, the only fish that will be found will have a shape close to the drawn shape. This is not the same as the much harder semantical query of finding all the pictures of fish in a pictorial database.
In the commercial domain, IBM QBIC REF is one of the earliest systems.
110716
Query by image and video content: the QBIC system
{ "venue": "IEEE Computer", "journal": "IEEE Computer", "mag_field_of_study": [ "Computer Science" ] }
Abstract. Capacity control in perceptron decision trees is typically performed by controlling their size. We prove that other quantities can be as relevant to reduce their flexibility and combat overfitting. In particular, we provide an upper bound on the generalization error which depends both on the size of the tree and on the margin of the decision nodes. So enlarging the margin in perceptron decision trees will reduce the upper bound on generalization error. Based on this analysis, we introduce three new algorithms, which can induce large margin perceptron decision trees. To assess the effect of the large margin bias, OC1 of Murthy, Kasif, and Salzberg (Journal of Artificial Intelligence Research, 1994, 2, 1-32), a well-known system for inducing perceptron decision trees, is used as the baseline algorithm. An extensive experimental study on real world data showed that all three new algorithms perform better or at least not significantly worse than OC1 on almost every dataset with only one exception. OC1 performed worse than the best margin-based method on every dataset.
Bennett et al. REF showed that maximizing margins in perceptron decision trees can be useful to combat overfitting.
8519830
Enlarging the Margins in Perceptron Decision Trees
{ "venue": "Machine Learning", "journal": "Machine Learning", "mag_field_of_study": [ "Computer Science" ] }
Identity-based encryption (IBE) is an exciting alternative to public-key encryption, as IBE eliminates the need for a Public Key Infrastructure (PKI). The senders using an IBE do not need to look up the public keys and the corresponding certificates of the receivers; the identities (e.g. emails or IP addresses) of the latter are sufficient to encrypt. Any setting, PKI- or identity-based, must provide a means to revoke users from the system. Efficient revocation is a well-studied problem in the traditional PKI setting. However in the setting of IBE, there has been little work on studying the revocation mechanisms. The most practical solution requires the senders to also use time periods when encrypting, and all the receivers (regardless of whether their keys have been compromised or not) to update their private keys regularly by contacting the trusted authority. We note that this solution does not scale well: as the number of users increases, the work on key updates becomes a bottleneck. We propose an IBE scheme that significantly improves key-update efficiency on the side of the trusted party (from linear to logarithmic in the number of users), while staying efficient for the users. Our scheme builds on the ideas of the Fuzzy IBE primitive and the binary tree data structure, and is provably secure.
In 2008, Boldyreva et al. REF proposed an identity-based encryption scheme that supports efficient revocation operation.
1473455
Identity-based encryption with efficient revocation
{ "venue": "ACM Conference on Computer and Communications Security", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Recent methods for learning vector space representations of words have succeeded in capturing fine-grained semantic and syntactic regularities using vector arithmetic, but the origin of these regularities has remained opaque. We analyze and make explicit the model properties needed for such regularities to emerge in word vectors. The result is a new global log-bilinear regression model that combines the advantages of the two major model families in the literature: global matrix factorization and local context window methods. Our model efficiently leverages statistical information by training only on the nonzero elements in a word-word co-occurrence matrix, rather than on the entire sparse matrix or on individual context windows in a large corpus. The model produces a vector space with meaningful substructure, as evidenced by its performance of 75% on a recent word analogy task. It also outperforms related models on similarity tasks and named entity recognition.
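A dense NumPy sketch of one SGD sweep over the GloVe objective, training only on nonzero co-occurrence counts. The weighting-function constants match the paper's stated defaults, but the released implementation uses AdaGrad and sparse storage, so treat this as illustrative.

```python
import numpy as np

def glove_epoch(X, W, C, bw, bc, lr=0.05, x_max=100.0, alpha=0.75):
    """One pass of SGD on sum_ij f(X_ij) (w_i.c_j + bw_i + bc_j - log X_ij)^2,
    over nonzero entries of the co-occurrence matrix X only."""
    for i, j in zip(*np.nonzero(X)):
        f = min(1.0, (X[i, j] / x_max) ** alpha)       # weighting function
        err = W[i] @ C[j] + bw[i] + bc[j] - np.log(X[i, j])
        g = lr * f * err
        W[i], C[j] = W[i] - g * C[j], C[j] - g * W[i]  # simultaneous update
        bw[i] -= g
        bc[j] -= g

rng = np.random.default_rng(0)
V, d = 6, 8
X = rng.poisson(2.0, (V, V)).astype(float)             # toy co-occurrence counts
W, C = rng.normal(0, 0.1, (V, d)), rng.normal(0, 0.1, (V, d))
bw, bc = np.zeros(V), np.zeros(V)
for _ in range(20):
    glove_epoch(X, W, C, bw, bc)
```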
Global Vectors (GloVe) REF combines global matrix factorization and local context window methods by training word vectors on a co-occurrence matrix, so that their differences predict co-occurrence ratios.
1957433
Glove: Global Vectors for Word Representation
{ "venue": "EMNLP", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Abstract. We study the tradeoffs involved in the energy-efficient localization and tracking of mobile targets by a wireless sensor network. Our work focuses on building a framework for evaluating the fundamental performance of tracking strategies in which only a small portion of the network is activated at any point in time. We first compare naive network operation with random activation and selective activation. In these strategies the gains in energy-savings come at the expense of increased uncertainty in the location of the target, resulting in reduced quality of tracking. We show that selective activation with a good prediction algorithm is a dominating strategy that can yield orders-of-magnitude energy savings with negligible difference in tracking quality. We then consider duty-cycled activation and show that it offers a flexible and dynamic tradeoff between energy expenditure and tracking error when used in conjunction with selective activation.
Other strategies, such as naive activation, randomized activation, and selective activation, as described in REF , all focus on trajectory prediction.
1827831
Energy-Quality Tradeoffs for Target Tracking in Wireless Sensor Networks
{ "venue": "in International Symposium on Aerospace/Defense sensing Simulation and Controls, Aerosense", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
In this paper, we present a data augmentation method that generates synthetic medical images using Generative Adversarial Networks (GANs). We propose a training scheme that first uses classical data augmentation to enlarge the training set and then further enlarges the data size and its diversity by applying GAN techniques for synthetic data augmentation. Our method is demonstrated on a limited dataset of computed tomography (CT) images of 182 liver lesions (53 cysts, 64 metastases and 65 hemangiomas). The classification performance using only classic data augmentation yielded 78.6% sensitivity and 88.4% specificity. By adding the synthetic data augmentation the results significantly increased to 85.7% sensitivity and 92.4% specificity.
For medical images, a generative solution REF was proposed for liver lesion classification.
28111473
Synthetic data augmentation using GAN for improved liver lesion classification
{ "venue": "2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018)", "journal": "2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018)", "mag_field_of_study": [ "Computer Science" ] }
Incorporating users' personality traits has shown to be instrumental in many personalized retrieval and recommender systems. Analysis of users' digital traces has become an important resource for inferring personality traits. To date, the analysis of users' explicit and latent characteristics is typically restricted to a single social networking site (SNS). In this work, we propose a novel method that integrates text, image, and users' meta features from two different SNSs: Twitter and Instagram. Our preliminary results indicate that the joint analysis of users' simultaneous activities in two popular SNSs seems to lead to a consistent decrease of the prediction errors for each personality trait.
The above-mentioned efforts were made on a single social networking site. Skowron et al. REF , however, carried out personality recognition experiments based on text, image, and user meta features collected from two popular social networking sites, Twitter and Instagram, and reported that such joint analysis contributed to a decrease in prediction error.
19385987
Fusing Social Media Cues: Personality Prediction from Twitter and Instagram
{ "venue": "WWW", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Representing images and videos with Symmetric Positive Definite (SPD) matrices, and considering the Riemannian geometry of the resulting space, has been shown to yield high discriminative power in many visual recognition tasks. Unfortunately, computation on the Riemannian manifold of SPD matrices, especially of high-dimensional ones, comes at a high cost that limits the applicability of existing techniques. In this paper, we introduce algorithms able to handle high-dimensional SPD matrices by constructing a lower-dimensional SPD manifold. To this end, we propose to model the mapping from the high-dimensional SPD manifold to the low-dimensional one with an orthonormal projection. This lets us formulate dimensionality reduction as the problem of finding a projection that yields a low-dimensional manifold either with maximum discriminative power in the supervised scenario, or with maximum variance of the data in the unsupervised one. We show that learning can be expressed as an optimization problem on a Grassmann manifold and discuss fast solutions for special cases. Our evaluation on several classification tasks evidences that our approach leads to a significant accuracy gain over state-of-the-art methods.
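The core mapping is easy to sketch: an orthonormal projection W sends an n x n SPD matrix S to the m x m SPD matrix W^T S W. Below, W is a random orthonormal basis purely for illustration; the paper learns W on a Grassmann manifold to maximise discrimination (supervised) or variance (unsupervised).

```python
import numpy as np

def project_spd(S, W):
    """Map an SPD matrix to a lower-dimensional SPD matrix: S -> W^T S W,
    with W (n x m) having orthonormal columns."""
    return W.T @ S @ W

rng = np.random.default_rng(0)
n, m = 50, 5
A = rng.normal(size=(n, n))
S = A @ A.T + 1e-3 * np.eye(n)                    # a random SPD matrix
W, _ = np.linalg.qr(rng.normal(size=(n, m)))      # orthonormal columns
S_low = project_spd(S, W)
print(S_low.shape, bool(np.all(np.linalg.eigvalsh(S_low) > 0)))  # (5, 5) True
```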
Harandi et al. REF produce a lower-dimensional SPD manifold with an orthogonal mapping obtained by devising a discriminative metric learning framework with respect to the original highdimensional data.
460342
Dimensionality Reduction on SPD Manifolds: The Emergence of Geometry-Aware Methods
{ "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "mag_field_of_study": [ "Computer Science", "Medicine" ] }
Outsourced decryption ABE system largely reduces the computation cost for users who intend to access the encrypted files stored in cloud. However, the correctness of the transformation ciphertext cannot be guaranteed because the user does not have the original ciphertext. Lai et al. provided an ABE scheme with verifiable outsourced decryption which helps the user to check whether the transformation done by the cloud is correct. In order to improve the computation performance and reduce communication overhead, we propose a new verifiable outsourcing scheme with constant ciphertext length. To be specific, our scheme achieves the following goals. (1) Our scheme is verifiable which ensures that the user efficiently checks whether the transformation is done correctly by the CSP. (2) The size of ciphertext and the number of expensive pairing operations are constant, which do not grow with the complexity of the access structure. (3) The access structure in our scheme is AND gates on multivalued attributes and we prove our scheme is verifiable and it is secure against selectively chosen-plaintext attack in the standard model. (4) We give some performance analysis which indicates that our scheme is adaptable for various limited bandwidth and computation-constrained devices, such as mobile phone.
In REF , Li et al. combined the verifiable outsourced decryption technique with the ABE scheme which possesses the property of constant ciphertext length.
3310041
Verifiable Outsourced Decryption of Attribute-Based Encryption with Constant Ciphertext Length
{ "venue": "Security and Communication Networks", "journal": "Security and Communication Networks", "mag_field_of_study": [ "Computer Science" ] }
Abstract-Due to the rapid advancement in the wireless communication technology and automotive industries, the paradigm of vehicular ad-hoc networks (VANETs) emerges as a promising approach to provide road safety, vehicle traffic management, and infotainment applications. Cooperative communication, on the other hand, can enhance the reliability of communication links in VANETs, thus mitigating wireless channel impairments due to the user mobility. In this paper, we present a cooperative scheme for medium access control (MAC) in VANETs, referred to as Cooperative ADHOC MAC (CAH-MAC). In CAH-MAC, neighboring nodes cooperate by utilizing unreserved time slots, for retransmission of a packet which failed to reach the target receiver due to a poor channel condition. Through mathematical analysis and simulation, we show that our scheme increases the probability of successful packet transmission and hence the network throughput in various networking scenarios.
In CAH-MAC REF , neighboring nodes cooperate by utilizing unreserved time slots, for retransmission of a packet which failed to reach the target receiver due to a poor channel condition.
2840547
CAH-MAC: Cooperative ADHOC MAC for Vehicular Networks
{ "venue": "IEEE Journal on Selected Areas in Communications", "journal": "IEEE Journal on Selected Areas in Communications", "mag_field_of_study": [ "Computer Science" ] }
We introduce the value iteration network (VIN): a fully differentiable neural network with a 'planning module' embedded within. VINs can learn to plan, and are suitable for predicting outcomes that involve planning-based reasoning, such as policies for reinforcement learning. Key to our approach is a novel differentiable approximation of the value-iteration algorithm, which can be represented as a convolutional neural network, and trained end-to-end using standard backpropagation. We evaluate VIN based policies on discrete and continuous path-planning domains, and on a natural-language based search task. We show that by learning an explicit planning computation, VIN policies generalize better to new, unseen domains.
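The planning module admits a compact PyTorch sketch: value iteration unrolled as repeated convolution over a reward-plus-value map followed by a channel-wise max. The kernel here is a random stand-in for illustration; in a VIN it is learned end-to-end by backpropagation.

```python
import torch
import torch.nn.functional as F

def value_iteration_module(r, q_kernel, iters=20):
    """Unrolled value iteration: Q = conv([reward, value]), V = max_a Q.
    r: reward map (1, 1, H, W); q_kernel: (A, 2, 3, 3) for A abstract actions."""
    v = torch.zeros_like(r)
    for _ in range(iters):
        q = F.conv2d(torch.cat([r, v], dim=1), q_kernel, padding=1)
        v = q.max(dim=1, keepdim=True).values
    return v

r = torch.zeros(1, 1, 8, 8)
r[0, 0, 7, 7] = 1.0                       # reward at the goal cell
q_kernel = 0.1 * torch.randn(4, 2, 3, 3)  # random stand-in for learned weights
print(value_iteration_module(r, q_kernel).shape)  # torch.Size([1, 1, 8, 8])
```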
Similarly, value iteration networks embed a planner into a neural network which can learn navigation tasks REF .
11374605
Value Iteration Networks
{ "venue": "Advances in Neural Information Processing Systems 29 pages 2154--2162, 2016", "journal": null, "mag_field_of_study": [ "Computer Science", "Mathematics" ] }
Abstract-We present a new distributed approach that establishes reputation-based trust among sensor nodes in order to identify malfunctioning and malicious sensor nodes and minimize their impact on applications. Our method adapts well to the special characteristics of wireless sensor networks, the most important being their resource limitations. Our methodology computes statistical trust and a confidence interval around the trust based on direct and indirect experiences of sensor node behavior. By considering the trust confidence interval, we are able to study the tradeoff between the tightness of the trust confidence interval with the resources used in collecting experiences. Furthermore, our approach allows dynamic scaling of redundancy levels based on the trust relationship between the nodes of a wireless sensor network. Using extensive simulations we demonstrate the benefits of our approach over an approach that uses static redundancy levels in terms of reduced energy consumption and longer life of the network. We also find that high confidence trust can be computed on each node with a relatively small memory overhead and used to determine the level of redundancy operations among nodes in the system.
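A standard Beta-reputation sketch of "trust with a confidence interval": treat good and bad observations of a node as Bernoulli trials, model the behaviour probability as Beta(successes+1, failures+1), and report the posterior mean with an equal-tailed interval. This is a common formulation, not necessarily the paper's exact statistics.

```python
from scipy import stats

def trust_with_interval(successes, failures, conf=0.95):
    """Posterior-mean trust plus a confidence interval that tightens as
    more direct/indirect experiences are collected."""
    a, b = successes + 1, failures + 1
    mean = a / (a + b)
    low, high = stats.beta.interval(conf, a, b)
    return mean, (low, high)

print(trust_with_interval(45, 5))   # many observations: tight interval
print(trust_with_interval(4, 1))    # few observations: wide interval
```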
Probst and Kasera REF described how to establish trust management among sensor nodes to detect malicious sensor nodes and minimize their influence on applications.
1840301
Statistical trust establishment in wireless sensor networks
{ "venue": "2007 International Conference on Parallel and Distributed Systems", "journal": "2007 International Conference on Parallel and Distributed Systems", "mag_field_of_study": [ "Computer Science" ] }
[Figure: Overview of our approach to articulated 3D human pose estimation; red boxes specify the selected component.] Pictorial structure models are the de facto standard for 2D human pose estimation. Numerous refinements and improvements have been proposed such as discriminatively trained body part detectors, flexible body models and local and global mixtures. While these techniques allow to achieve the state-of-the-art performance for 2D pose estimation, they have not yet been extended to enable pose estimation in 3D; instead this problem is traditionally addressed using 3D body models and involves complex inference in a high-dimensional space of 3D body configurations. We formulate the articulated 3D human pose estimation problem as a joint inference over the set of 2D projections of the pose in each of the camera views. As a first contribution of this paper, we propose a 2D pose estimation approach that extends the state-of-the-art 2D pictorial structures model [6] with flexible parts, color features, multi-modal pairwise terms, and mixtures of pictorial structures. The second and main contribution is to extend this 2D pose estimation model to a multi-view model that performs joint reasoning over people poses seen from multiple viewpoints. The output of this novel model is then used to recover 3D pose. We evaluate our multi-view pictorial structures model on the HumanEva-I [8] and MPII Cooking [7] datasets. In comparison to related work for 3D pose estimation our approach achieves similar or better results while operating on single frames only and not relying on activity-specific motion models or tracking. Notably, our approach outperforms the state-of-the-art for activities with more complex motions. Single-view model: The pictorial structures model, originally introduced in [2, 3], represents the human body as a configuration L = {l_1, ..., l_N} of N rigid parts and a set of pairwise part relationships E. The image position and absolute orientation of each part is given by l_i = (x_i, y_i, θ_i). We formulate the model as a conditional random field, and assume that the probability of the part configuration L given the image evidence I factorizes into a product of unary and pairwise terms: p(L|I) ∝ ∏_{n=1}^{N} f_n(l_n; I) · ∏_{(i,j)∈E} f_ij(l_i, l_j) (1). The part likelihood terms f_n(l_n; I) are represented with boosted part detectors that rely on the encoding of the image using a densely computed grid of shape context descriptors [1]. We concatenate these shape context features with color features and learn a boosted part detector on top of this combined representation. Note that augmenting shape information with color allows us to automatically learn the relative importance of both features at the part detection stage. The pairwise terms f_ij(l_i, l_j) which encode the spatial constraints between parts are traditionally modeled with a Gaussian distribution in the transformed space of the joint between two parts. We extend our model by introducing mixture models at the level of these pairwise part dependencies. To that end we replace the unimodal Gaussian with a term that maximizes over multiple modes and represent each mode with a Gaussian. Following [4, 5] we extend our approach to a mixture of pictorial structures models. We obtain the mixture components by clustering the training data with k-means and learning a separate model for each cluster. The components typically correspond to major modes in the data, such as various viewpoints of the person with respect to the camera.
The index of the component is treated as a latent variable to be inferred at test time. We select the best component with the minimal uncertainty in the marginal posterior distributions of the body parts. In our experiments this approach worked slightly better compared to a trained holistic classifier that distinguishes the mixture component based on the contents of the person bounding box. Multi-view model: To exploit the multi-view information we augment the model with appearance and spatial correspondence constraints across views. In order to estimate the 3D pose we proceed in two steps. In the first step we jointly infer the 2D projections of the 3D body joints across views exploiting multi-view constraints. In the second step, we recover the 3D pose by triangulation of the estimated 2D projections. For simplicity, we describe our multi-view model for the case of two views. For view m, let us denote the 2D body configuration as L^m and the image evidence as I^m. According to Eq. 1 the single-view factors f(L^1; I^1) and f(L^2; I^2) representing the conditional posterior over body configurations decompose into products of unary and pairwise terms that define appearance and spatial constraints between parts independently for each view. The joint posterior over configurations in both views is given by p(L^1, L^2 | I^1, I^2).
Some approaches to tackle this problem already exist: REF proposes a multi-view pictorial structure algorithm leading to a 3D model of the person.
8474682
Multi-view Pictorial Structures for 3D Human Pose Estimation
{ "venue": "BMVC", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
This paper addresses the following three topics: positive semidefinite (psd) matrix completions, universal rigidity of frameworks, and the Strong Arnold Property (SAP). We show some strong connections among these topics, using semidefinite programming as unifying theme. Our main contribution is a sufficient condition for constructing partial psd matrices which admit a unique completion to a full psd matrix. Such partial matrices are an essential tool in the study of the Gram dimension gd(G) of a graph G, a recently studied graph parameter related to the low psd matrix completion problem. Additionally, we derive an elementary proof of Connelly's sufficient condition for universal rigidity of tensegrity frameworks and we investigate the links between these two sufficient conditions. We also give a geometric characterization of psd matrices satisfying the Strong Arnold Property in terms of nondegeneracy of an associated semidefinite program, which we use to establish some links between the Gram dimension gd(·) and the Colin de Verdière type graph parameter ν^=(·).
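Finding some psd completion of a partial matrix is itself a semidefinite feasibility program, which gives a concrete handle on the objects discussed above. Below is a hedged cvxpy sketch on a made-up 3x3 partial matrix; whether the completion is unique is a separate question, which the paper's sufficient condition addresses.

```python
import cvxpy as cp
import numpy as np

# Known entries of a partial symmetric matrix: the diagonal plus the
# off-diagonal entries on the edges of a graph (here a path 0-1-2).
known = {(0, 0): 1.0, (1, 1): 1.0, (2, 2): 1.0, (0, 1): 0.6, (1, 2): 0.4}

X = cp.Variable((3, 3), PSD=True)                  # symmetric psd variable
constraints = [X[i, j] == v for (i, j), v in known.items()]
cp.Problem(cp.Minimize(0), constraints).solve()    # feasibility SDP

print(np.round(X.value, 3))                        # one psd completion
```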
Using results from REF we can rephrase our necessary and sufficient condition for universal completability of LEFs in terms of the Strong Arnold Property (cf.
3578508
Positive Semidefinite Matrix Completion, Universal Rigidity and the Strong Arnold Property
{ "venue": null, "journal": "arXiv: Optimization and Control", "mag_field_of_study": [ "Mathematics" ] }
Broadcasting is a common operation in a network to resolve many issues. In a mobile ad hoc network (MANET) in particular, due to host mobility, such operations are expected to be executed more frequently (such as finding a route to a particular host, paging a particular host, and sending an alarm signal). Because radio signals are likely to overlap with others in a geographical area, a straightforward broadcasting by flooding is usually very costly and will result in serious redundancy, contention, and collision, to which we refer as the broadcast storm problem. In this paper, we identify this problem by showing how serious it is through analyses and simulations. We propose several schemes to reduce redundant rebroadcasts and differentiate timing of rebroadcasts to alleviate this problem. Simulation results are presented, which show different levels of improvement over the basic flooding approach.
The flooding approach is costly and can cause a serious problem, called the broadcast storm problem, which was identified in REF .
7914832
The broadcast storm problem in a mobile ad hoc network
{ "venue": "MobiCom '99", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Sentiment analysis has been widely researched in the domain of online review sites with the aim of generating summarized opinions of users about different aspects of products. However, there has been little work focusing on identifying the polarity of sentiments expressed by users during disaster events. Identifying such sentiments from online social networking sites can help emergency responders understand the dynamics of the network, e.g., the main users' concerns, panics, and the emotional impacts of interactions among members. In this paper, we perform a sentiment analysis of tweets posted on Twitter during the disastrous Hurricane Sandy and visualize online users' sentiments on a geographical map centered around the hurricane. We show how users' sentiments change according not only to their locations, but also based on the distance from the disaster. In addition, we study how the divergence of sentiments in a tweet posted during the hurricane affects the tweet retweetability. We find that extracting sentiments during a disaster may help emergency responders develop stronger situational awareness of the disaster zone itself.
Neppalli et al. REF developed a system for identifying the polarity of tweets during Hurricane Sandy.
42809986
null
null
Abstract-Vehicle-to-vehicle (VTV) wireless communications have many envisioned applications in traffic safety and congestion avoidance, but the development of suitable communications systems and standards requires accurate models for the VTV propagation channel. In this paper, we present a new wideband multiple-input-multiple-output (MIMO) model for VTV channels based on extensive MIMO channel measurements performed at 5.2 GHz in highway and rural environments in Lund, Sweden. The measured channel characteristics, in particular the nonstationarity of the channel statistics, motivate the use of a geometry-based stochastic channel model (GSCM) instead of the classical tapped-delay line model. We introduce generalizations of the generic GSCM approach and techniques for parameterizing it from measurements and find it suitable to distinguish between diffuse and discrete scattering contributions. The time-variant contribution from discrete scatterers is tracked over time and delay using a high resolution algorithm, and our observations motivate their power being modeled as a combination of a (deterministic) distance decay and a slowly varying stochastic process. The paper gives a full parameterization of the channel model and supplies an implementation recipe for simulations. The model is verified by comparison of MIMO antenna correlations derived from the channel model to those obtained directly from the measurements.
Similarly, a wideband multiple-input-multiple-output (MIMO) model was proposed in REF, based on extensive measurements performed in highway and rural scenarios in the 5.2 GHz frequency band.
9184950
A geometry-based stochastic MIMO model for vehicle-to-vehicle communications
{ "venue": "IEEE Transactions on Wireless Communications", "journal": "IEEE Transactions on Wireless Communications", "mag_field_of_study": [ "Computer Science", "Mathematics" ] }
We present YOLO, a new approach to object detection. Prior work on object detection repurposes classifiers to perform detection. Instead, we frame object detection as a regression problem to spatially separated bounding boxes and associated class probabilities. A single neural network predicts bounding boxes and class probabilities directly from full images in one evaluation. Since the whole detection pipeline is a single network, it can be optimized end-to-end directly on detection performance. Our unified architecture is extremely fast. Our base YOLO model processes images in real-time at 45 frames per second. A smaller version of the network, Fast YOLO, processes an astounding 155 frames per second while still achieving double the mAP of other real-time detectors. Compared to state-of-the-art detection systems, YOLO makes more localization errors but is less likely to predict false positives on background. Finally, YOLO learns very general representations of objects. It outperforms other detection methods, including DPM and R-CNN, when generalizing from natural images to other domains like artwork.
YOLO REF cleverly formulates object detection as a regression task, leading to very efficient detection systems.
206594738
You Only Look Once: Unified, Real-Time Object Detection
{ "venue": "2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)", "journal": "2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)", "mag_field_of_study": [ "Computer Science" ] }
Collaborative machine learning and related techniques such as federated learning allow multiple participants, each with his own training dataset, to build a joint model by training locally and periodically exchanging model updates. We demonstrate that these updates leak unintended information about participants' training data and develop passive and active inference attacks to exploit this leakage. First, we show that an adversarial participant can infer the presence of exact data points-for example, specific locations-in others' training data (i.e., membership inference). Then, we show how this adversary can infer properties that hold only for a subset of the training data and are independent of the properties that the joint model aims to capture. For example, he can infer when a specific person first appears in the photos used to train a binary gender classifier. We evaluate our attacks on a variety of tasks, datasets, and learning configurations, analyze their limitations, and discuss possible defenses.
Melis et al. REF developed passive and active membership and property inference attacks that uncover features of clients' training data from model updates.
53099247
Exploiting Unintended Feature Leakage in Collaborative Learning
{ "venue": "2019 IEEE Symposium on Security and Privacy (SP)", "journal": "2019 IEEE Symposium on Security and Privacy (SP)", "mag_field_of_study": [ "Computer Science" ] }
We present CROSSGRAD, a method to use multi-domain training data to learn a classifier that generalizes to new domains. CROSSGRAD does not need an adaptation phase via labeled or unlabeled data, or domain features in the new domain. Most existing domain adaptation methods attempt to erase domain signals using techniques like domain adversarial training. In contrast, CROSSGRAD is free to use domain signals for predicting labels, if it can prevent overfitting on training domains. We conceptualize the task in a Bayesian setting, in which a sampling step is implemented as data augmentation, based on domain-guided perturbations of input instances. CROSSGRAD parallelly trains a label and a domain classifier on examples perturbed by loss gradients of each other's objectives. This enables us to directly perturb inputs, without separating and re-mixing domain signals while making various distributional assumptions. Empirical evaluation on three different applications where this setting is natural establishes that (1) domain-guided perturbation provides consistently better generalization to unseen domains, compared to generic instance perturbation methods, and that (2) data augmentation is a more stable and accurate method than domain adversarial training.
One method, called CROSSGRAD REF, has recently been proposed to learn models that generalize to unseen domains by using domain signals, without requiring semantic descriptors.
13754527
Generalizing Across Domains via Cross-Gradient Training
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Mathematics", "Computer Science" ] }
The problem of reducing the age-of-information has been extensively studied in single-hop networks. In this paper, we minimize the age-of-information in general multihop networks. If the packet transmission times over the network links are exponentially distributed, we prove that a preemptive Last Generated First Served (LGFS) policy results in smaller age processes at all nodes of the network (in a stochastic ordering sense) than any other causal policy. In addition, for arbitrary distributions of packet transmission times, the non-preemptive LGFS policy is shown to minimize the age processes at all nodes among all non-preemptive work-conserving policies (again in a stochastic ordering sense). It is surprising that such simple policies can achieve optimality of the joint distribution of the age processes at all nodes even under arbitrary network topologies, as well as arbitrary packet generation and arrival times. These optimality results not only hold for the age processes, but also for any non-decreasing functional of the age processes. • We consider a general multihop network where the update packets do not necessarily arrive at the gateway node in the order of their generation times. We prove that, if the packet transmission times over the network links are exponentially distributed, then for arbitrary arrival process, network topology, and buffer sizes, the preemptive LGFS policy minimizes the age processes at all nodes in the network.
In particular, Bedewy et al. REF considered multihop networks with an external source and proved that the preemptive last-generated-first-served (LGFS) policy minimizes the age processes at all nodes among all causal policies for exponentially distributed packet transmission times, while the non-preemptive LGFS policy is optimal among non-preemptive work-conserving policies for generally distributed packet transmission times.
782212
Age-optimal information updates in multihop networks
{ "venue": "2017 IEEE International Symposium on Information Theory (ISIT)", "journal": "2017 IEEE International Symposium on Information Theory (ISIT)", "mag_field_of_study": [ "Computer Science", "Mathematics" ] }
We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models (Peters et al., 2018; Radford et al., 2018) , BERT is designed to pre-train deep bidirectional representations by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT representations can be fine-tuned with just one additional output layer to create state-of-theart models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE benchmark to 80.4% (7.6% absolute improvement), MultiNLI accuracy to 86.7% (5.6% absolute improvement) and the SQuAD v1.1 question answering Test F1 to 93.2 (1.5 absolute improvement), outperforming human performance by 2.0.
Bidirectional Encoder Representations from Transformers (BERT) is a recently proposed model that is pre-trained on a huge dataset and can be fine-tuned for a specific task, including Named Entity Recognition (NER); it outperforms most state-of-the-art results on several NLP tasks REF.
52967399
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science" ] }
Our goal in this paper is to discover near duplicate patterns in large collections of artworks. This is harder than standard instance mining due to differences in the artistic media (oil, pastel, drawing, etc), and imperfections inherent in the copying process. Our key technical insight is to adapt a standard deep feature to this task by fine-tuning it on the specific art collection using self-supervised learning. More specifically, spatial consistency between neighbouring feature matches is used as supervisory fine-tuning signal. The adapted feature leads to more accurate style-invariant matching, and can be used with a standard discovery approach, based on geometric verification, to identify duplicate patterns in the dataset. The approach is evaluated on several different datasets and shows surprisingly good qualitative discovery results. For quantitative evaluation of the method, we annotated 273 near duplicate details in a dataset of 1587 artworks attributed to Jan Brueghel and his workshop. Beyond artworks, we also demonstrate improvement on localization on the Oxford5K photo dataset as well as on historical photograph localization on the Large Time Lags Location (LTLL) dataset.
Recently, REF learned deep mid-level features for matching across different visual media (drawings, oil paintings, frescoes, sketches, etc), and used them together with spatial verification to discover copied details in a dataset of thousands of artworks.
71145754
Discovering Visual Patterns in Art Collections With Spatially-Consistent Feature Learning
{ "venue": "2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)", "journal": "2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)", "mag_field_of_study": [ "Computer Science" ] }
Much of the success of the Internet services model can be attributed to the popularity of a class of workloads that we call Online Data-Intensive (OLDI) services. These workloads perform significant computing over massive data sets per user request but, unlike their offline counterparts (such as MapReduce computations), they require responsiveness in the sub-second time scale at high request rates. Large search products, online advertising, and machine translation are examples of workloads in this class. Although the load in OLDI services can vary widely during the day, their energy consumption sees little variance due to the lack of energy proportionality of the underlying machinery. The scale and latency sensitivity of OLDI workloads also make them a challenging target for power management techniques. We investigate what, if anything, can be done to make OLDI systems more energy-proportional. Specifically, we evaluate the applicability of active and idle low-power modes to reduce the power consumed by the primary server components (processor, memory, and disk), while maintaining tight response time constraints, particularly on 95th-percentile latency. Using Web search as a representative example of this workload class, we first characterize a production Web search workload at cluster-wide scale. We provide a finegrain characterization and expose the opportunity for power savings using low-power modes of each primary server component. Second, we develop and validate a performance model to evaluate the impact of processor-and memory-based lowpower modes on the search latency distribution and consider the benefit of current and foreseeable low-power modes. Our results highlight the challenges of power management for this class of workloads. In contrast to other server workloads, for which idle low-power modes have shown great promise, for OLDI workloads we find that energy-proportionality with acceptable query latency can only be achieved using coordinated, full-system active low-power modes.
REF highlight unique challenges for low-latency workloads and advocate full system active low-power modes.
13789749
Power management of online data-intensive services
{ "venue": "2011 38th Annual International Symposium on Computer Architecture (ISCA)", "journal": "2011 38th Annual International Symposium on Computer Architecture (ISCA)", "mag_field_of_study": [ "Computer Science" ] }
Systems that extract structured information from natural language passages have been highly successful in specialized domains. The time is opportune for developing analogous applications for molecular biology and genomics. We present a system, GENIES, that extracts and structures information about cellular pathways from the biological literature in accordance with a knowledge model that we developed earlier. We implemented GENIES by modifying an existing medical natural language processing system, MedLEE, and performed a preliminary evaluation study. Our results demonstrate the value of the underlying techniques for the purpose of acquiring valuable knowledge from biological journals.
Friedman et al. REF have developed a similar system to extract structured information about cellular pathways using a knowledge model.
1554697
GENIES : a natural-language processing system for the extraction of molecular pathways from journal articles
{ "venue": "Bioinformatics", "journal": "Bioinformatics", "mag_field_of_study": [ "Computer Science", "Medicine" ] }
Abstract-This paper addresses the problem of maximizing the network lifetime of rechargeable Wireless Sensor Networks (WSNs) whilst ensuring all targets are monitored continuously by at least one sensor node. The objective is to determine a group of sensor nodes, and their wake-up schedule such that within a time interval, one subset of nodes are active whilst others enter the sleep state to conserve energy as well as recharge their battery. We propose a Linear Programming (LP) based solution to determine the activation schedule of sensor nodes whilst affording them recharging opportunities and at the same time ensures complete target coverage. The results show our LP solution achieves more than twice the performance in terms of network lifetime as compared to similar algorithms developed for finite battery WSNs. However, it is computationally expensive. We therefore propose Maximum Utility Algorithm (MUA), a few orders of magnitude faster approach that achieves 3/4 of the network lifetime obtained by our LP solution.
Yang and Chin REF considered the problem of maximizing the network lifetime while ensuring all targets are continuously monitored by at least one sensor.
20193995
Novel Algorithms for Complete Targets Coverage in Energy Harvesting Wireless Sensor Networks
{ "venue": "IEEE Communications Letters", "journal": "IEEE Communications Letters", "mag_field_of_study": [ "Computer Science" ] }
We consider the problem of classifying documents not by topic, but by overall sentiment, e.g., determining whether a review is positive or negative. Using movie reviews as data, we find that standard machine learning techniques definitively outperform human-produced baselines. However, the three machine learning methods we employed (Naive Bayes, maximum entropy classification, and support vector machines) do not perform as well on sentiment classification as on traditional topic-based categorization. We conclude by examining factors that make the sentiment classification problem more challenging.
Pang et al. REF employed Naive Bayes, maximum entropy classification, and support vector machines (SVM) methods to solve the sentiment classification problem.
7105713
Thumbs Up? Sentiment Classification Using Machine Learning Techniques
{ "venue": "Conference On Empirical Methods In Natural Language Processing", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
This paper presents an integrated MAC and routing protocol called Delay Guaranteed Routing and MAC (DGRAM) for delay sensitive wireless sensor network (WSN) applications. DGRAM is a TDMA-based protocol designed to provide deterministic delay guarantee in an energy efficient manner. The design is based on slot reuse to reduce the latency of a node in accessing the medium, while ensuring contention free medium access. The transmission and reception cycles of nodes are carefully computed so that data is transported from the source towards the sink while the nodes could sleep at the other times to conserve energy. Thus, routes of data packets are integrated into DGRAM. We provide a detailed design of time slot assignment and delay analysis of the protocol. One major advantage of DGRAM over other TDMA protocols is that the slot assignment is done in a fully distributed manner making the DGRAM network self-configuring. We have simulated DGRAM using ns2 simulator and compared the results with those of SMAC for a similar network. Simulation results show that the delay experienced by data packets is always less than the analytical delay bound for which the protocol is designed. As per simulation results, the average energy consumption does not change as the event rate changes, and is less than that of SMAC. This characteristic of DGRAM provides flexibility in choosing various operating parameters without having to worry about energy efficiency.
The Delay Guaranteed Routing and MAC protocol (DGRAM) REF is a joint duty-cycled MAC and routing protocol based on contention-free TDMA.
62105372
DGRAM: A Delay Guaranteed Routing and MAC protocol for wireless sensor networks
{ "venue": "2008 International Symposium on a World of Wireless, Mobile and Multimedia Networks", "journal": "2008 International Symposium on a World of Wireless, Mobile and Multimedia Networks", "mag_field_of_study": [ "Computer Science" ] }
Tapping into the wisdom of the crowd, social tagging can be considered an alternative mechanism-as opposed to Web search-for organizing and discovering information on the Web. Effective tag-based recommendation of information items, such as Web resources, is a critical aspect of this social information discovery mechanism. A precise understanding of the information structure of social tagging systems lies at the core of an effective tag-based recommendation method. While most of the existing research either implicitly or explicitly assumes a simple tripartite graph structure for this purpose, we propose a comprehensive information structure to capture all types of co-occurrence information in the tagging data. Based on the proposed information structure, we further propose a unified user profiling scheme to make full use of all available information. Finally, supported by our proposed user profile, we propose a novel framework for collaborative filtering in social tagging systems. In our proposed framework, we first generate joint item-tag recommendations, with tags indicating topical interests of users in target items. These joint recommendations are then refined by the wisdom from the crowd and projected to the item space for final item recommendations. Evaluation using three real-world datasets shows that our proposed recommendation approach significantly outperformed state-of-the-art approaches.
A method highly relevant to ours is the joint item-tag recommendation approach REF , which first makes joint item-tag recommendations and then projects them to the item space for final item recommendation.
8095096
Collaborative filtering in social tagging systems based on joint item-tag recommendations
{ "venue": "CIKM '10", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
A high number of Internet shops makes it difficult for a customer to review manually all the available offers and select optimal outlets for shopping. A partial solution to the problem is brought by price comparators which produce price rankings from collected offers. However, their possibilities are limited to a comparison of offers for a single product requested by the customer. The issue we investigate in this paper is a multiple-item multiple-shop optimization problem, in which total expenses of a customer to buy a given set of items should be minimized over all available offers. In this paper, the Internet Shopping Optimization Problem (ISOP) is defined in a formal way and a proof of its strong NP-hardness is provided. We also describe polynomial time algorithms for special cases of the problem.
Moreover, the problem is proven to be strongly NP-hard, and polynomial-time algorithms are given for special cases REF.
3216221
Internet shopping optimization problem
{ "venue": "Int. J. Appl. Math. Comput. Sci.", "journal": "Int. J. Appl. Math. Comput. Sci.", "mag_field_of_study": [ "Computer Science" ] }
Abstract: In this paper, we propose mathematical optimization models of household energy units to optimally control the major residential energy loads while preserving the user preferences. User comfort is modelled in a simple way, which considers appliance class, user preferences and weather conditions. The wind-driven optimization (WDO) algorithm with the objective function of comfort maximization along with minimum electricity cost is defined and implemented. On the other hand, for maximum electricity bill and peak reduction, min-max regret-based knapsack problem (K-WDO) algorithm is used. To validate the effectiveness of the proposed algorithms, extensive simulations are conducted for several scenarios. The simulations show that the proposed algorithms provide with the best optimal results with a fast convergence rate, as compared to the existing techniques. Appl. Sci. 2015Sci. , 5 1135
In REF, the authors propose a technique for controlling residential energy loads while maximizing user comfort (UC) and minimizing the electricity bill.
17816736
An Efficient Power Scheduling Scheme for Residential Load Management in Smart Homes
{ "venue": null, "journal": "Applied Sciences", "mag_field_of_study": [ "Engineering" ] }
Abstract-An ambient RF energy harvesting sensor node with onboard sensing and communication functionality was developed and tested. The minimal RF input power required for sensor node operation was -18 dBm (15.8 µW). Using a 6 dBi receive antenna, the most sensitive RF harvester was shown to operate at a distance of 10.4 km from a 1 MW UHF television broadcast transmitter, and over 200 m from a cellular base transceiver station. A complete ambient RF-powered prototype was constructed which measured temperature and light level and wirelessly transmitted these measurements.
Parks et al. REF demonstrated a sensor node harvesting ambient RF energy from both digital TV and cellular radio waves that operates at a distance of 10.4 km from a 1 MW UHF television broadcast tower, and over 200 m from a cellular base transceiver station.
47441324
A wireless sensing platform utilizing ambient RF energy
{ "venue": "2013 IEEE Topical Conference on Biomedical Wireless Technologies, Networks, and Sensing Systems", "journal": "2013 IEEE Topical Conference on Biomedical Wireless Technologies, Networks, and Sensing Systems", "mag_field_of_study": [ "Computer Science" ] }
The computation for today's intelligent personal assistants such as Apple Siri, Google Now, and Microsoft Cortana, is performed in the cloud. This cloud-only approach requires significant amounts of data to be sent to the cloud over the wireless network and puts significant computational pressure on the datacenter. However, as the computational resources in mobile devices become more powerful and energy efficient, questions arise as to whether this cloud-only processing is desirable moving forward, and what are the implications of pushing some or all of this compute to the mobile devices on the edge. In this paper, we examine the status quo approach of cloud-only processing and investigate computation partitioning strategies that effectively leverage both the cycles in the cloud and on the mobile device to achieve low latency, low energy consumption, and high datacenter throughput for this class of intelligent applications. Our study uses 8 intelligent applications spanning computer vision, speech, and natural language domains, all employing state-of-the-art Deep Neural Networks (DNNs) as the core machine learning technique. We find that given the characteristics of DNN algorithms, a fine-grained, layer-level computation partitioning strategy based on the data and computation variations of each layer within a DNN has significant latency and energy advantages over the status quo approach. Using this insight, we design Neurosurgeon, a lightweight scheduler to automatically partition DNN computation between mobile devices and datacenters at the granularity of neural network layers. Neurosurgeon does not require per-application profiling. It adapts to various DNN architectures, hardware platforms, wireless networks, and server load levels, intelligently partitioning computation for best latency or best mobile energy. We evaluate Neurosurgeon on a state-of-the-art mobile development platform and show that it improves end-to-end latency by 3.1× on average and up to 40.7×, reduces mobile energy consumption by 59.5% on average and up to 94.7%, and improves datacenter throughput by 1.5× on average and up to 6.7×.
Neurosurgeon REF identifies when it is beneficial to offload a DNN layer to be computed on the cloud.
1158124
Neurosurgeon: Collaborative Intelligence Between the Cloud and Mobile Edge
{ "venue": "ASPLOS '17", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First, we highlight convolution with upsampled filters, or 'atrous convolution', as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-views, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed "DeepLab" system sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 percent mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.
DeepLab REF introduces atrous convolution (convolution with upsampled filters), atrous spatial pyramid pooling, and CRF-based post-processing to improve segmentation benchmarks.
3429309
DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs
{ "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "mag_field_of_study": [ "Computer Science", "Medicine" ] }
Previous CNN-based video super-resolution approaches need to align multiple frames to the reference. In this paper, we show that proper frame alignment and motion compensation is crucial for achieving high quality results. We accordingly propose a "sub-pixel motion compensation" (SPMC) layer in a CNN framework. Analysis and experiments show the suitability of this layer in video SR. The final end-to-end, scalable CNN framework effectively incorporates the SPMC layer and fuses multiple frames to reveal image details. Our implementation can generate visually and quantitatively high-quality results, superior to current state-of-the-arts, without the need of parameter tuning.
Tao et al. REF introduced a new sub-pixel motion compensation (SPMC) layer to perform motion compensation and up-sampling jointly.
3193713
Detail-Revealing Deep Video Super-Resolution
{ "venue": "2017 IEEE International Conference on Computer Vision (ICCV)", "journal": "2017 IEEE International Conference on Computer Vision (ICCV)", "mag_field_of_study": [ "Computer Science" ] }
Abstract Recently, a method for removing shadows from colour images was developed (Finlayson et al. in IEEE Trans. Pattern Anal. Mach. Intell. 28:59-68, 2006) that relies upon finding a special direction in a 2D chromaticity feature space. This "invariant direction" is that for which particular colour features, when projected into 1D, produce a greyscale image which is approximately invariant to intensity and colour of scene illumination. Thus shadows, which are in essence a particular type of lighting, are greatly attenuated. The main approach to finding this special angle is a camera calibration: a colour target is imaged under many different lights, and the direction that best makes colour patch images equal across illuminants is the invariant direction. Here, we take a different approach. In this work, instead of a camera calibration we aim at finding the invariant direction from evidence in the colour image itself. Specifically, we recognize that producing a 1D projection in the correct invariant direction will result in a 1D distribution of pixel values that have smaller entropy than projecting in the wrong direction. The reason is that the correct projection results in a probability distribution spike, for pixels all the same except differing by the lighting that produced their observed RGB values and therefore lying along a line with orientation equal to the invariant direction. Hence we seek that projection which produces a type of intrinsic, independent of lighting reflectance-information only image by minimizing entropy, and from there go on to remove shadows as previously. To be able to develop an effective description of the entropy-minimization task, we go over to the quadratic entropy, rather than Shannon's definition. Replacing the observed pixels with a kernel density probability distribution, the quadratic entropy can be written as a very simple formulation, and can be evaluated using the efficient Fast Gauss Transform. The entropy, written in this embodiment, has the advantage that it is more insensitive to quantization than is the usual definition. The resulting algorithm is quite reliable, and the shadow removal step produces good shadowfree colour image results whenever strong shadow edges are present in the image. In most cases studied, entropy has a strong minimum for the invariant direction, revealing a new property of image formation.
For shadow removal, REF proposed detecting shadows by recovering a one-dimensional illumination-invariant image through entropy minimization.
3353774
Entropy Minimization for Shadow Removal
{ "venue": "International Journal of Computer Vision", "journal": "International Journal of Computer Vision", "mag_field_of_study": [ "Computer Science", "Mathematics" ] }
Visual object recognition requires the matching of an image with a set of models stored in memory. In this paper, we propose an approach to recognition in which a 3-D object is represented by the linear combination of 2-D images of the object. If M = {M_1, …, M_k} is the set of pictures representing a given object and P is the 2-D image of an object to be recognized, then P is considered to be an instance of M if P = Σ_{i=1}^{k} α_i M_i for some constants α_i. We show that this approach handles correctly rigid 3-D transformations of objects with sharp as well as smooth boundaries and can also handle nonrigid transformations. The paper is divided into two parts. In the first part, we show that the variety of views depicting the same object under different transformations can often be expressed as the linear combinations of a small number of views. In the second part, we suggest how this linear combination property may be used in the recognition process. Index Terms: Alignment, linear combinations, object recognition, recognition, 3-D object recognition, visual recognition. A. Recognition by Alignment: Visual object recognition requires the matching of an image with a set of models stored in memory. Let M = {M_1, …, M_n} be the set of stored models and P be the image to be recognized. In general, the viewed object, depicted by P, may differ from all the previously seen images of the same object. It may be, for instance, the image of a three-dimensional object seen from a novel viewing position. To compensate for these variations, we may allow the models (or the viewed object) to undergo certain compensating transformations during the matching stage. If T is the set of allowable transformations, the matching stage requires the selection of a model M_i ∈ M and a transformation T ∈ T such that the viewed object P and the transformed model TM_i will be as close as possible. The general scheme is called the alignment approach since an alignment transformation is applied to the model (or to the viewed object) prior to, or during, the matching stage. Such an approach is used in [S]. In this paper, we suggest a different approach, in which each model is represented by the linear combination of 2-D images of the object. The new approach has several advantages. First, it handles all the rigid 3-D transformations, but it is not restricted to such transformations. Second, there is no need in this scheme to explicitly recover and represent the 3-D structure of objects. Third, the computations involved are often simpler than in previous schemes. The paper is divided into two parts. In the first (Section I), we show that the variety of views depicting the same object under different transformations can often be expressed as the linear combinations of a small number of views. In the second part (Section II), we suggest how this linear combination property may be used in the recognition process. The modeling of objects using linear combinations of images is based on the following observation. For many continuous transformations of interest in recognition, such as 3-D rotation, translation, and scaling, all the possible views of the transforming object can be expressed simply as the linear combination of other views of the same object. The coefficients of these linear combinations often obey certain functional restrictions.
In the next two sections, we show that the set of possible images of an object undergoing rigid 3-D transformations and scaling is embedded in a linear space and spanned by a small number of 2-D images. The images we will consider are 2-D edge maps produced in the image by the (orthographic) projection of the bounding contours and other visible contours on 3-D objects. We will make use of the following definitions. Given an object and a viewing direction, the rim is the set of all the points on the object's surface whose normal is perpendicular to the viewing direction [13]. This set is also called the contour generator [17]. A silhouette is an image generated by the orthographic projection of the rim. In the analysis below, we assume that every point along the silhouette is generated by a single rim point.
Ullman and Basri REF demonstrated that novel views of an object can be expressed as linear combinations of a small number of other views of the same object.
8989489
Recognition by Linear Combinations of Models
{ "venue": null, "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Abstract-With the introduction of network function virtualization technology, migrating entire enterprise data centers into the cloud has become a possibility. However, for a cloud service provider (CSP) to offer such services, several research problems still need to be addressed. In previous work, we have introduced a platform, called network function center (NFC), to study research issues related to virtualized network functions (VNFs). In an NFC, we assume VNFs to be implemented on virtual machines that can be deployed in any server in the CSP network. We have proposed a resource allocation algorithm for VNFs based on genetic algorithms (GAs). In this paper, we present a comprehensive analysis of two resource allocation algorithms based on GA for: 1) the initial placement of VNFs and 2) the scaling of VNFs to support traffic changes. We compare the performance of the proposed algorithms with a traditional integer linear programming resource allocation technique. We then combine data from previous empirical analyses to generate realistic VNF chains and traffic patterns, and evaluate the resource allocation decision making algorithms. We assume different architectures for the data center, implement different fitness functions with GA, and compare their performance when scaling over the time.
Rankothge et al. REF presented a resource allocation algorithm based on genetic algorithms to solve VNF placement in a data center while minimizing the usage of IT resources.
28867567
Optimizing Resource Allocation for Virtualized Network Functions in a Cloud Center Using Genetic Algorithms
{ "venue": "IEEE Transactions on Network and Service Management", "journal": "IEEE Transactions on Network and Service Management", "mag_field_of_study": [ "Computer Science" ] }
The seminal result of Impagliazzo and Rudich (STOC 1989) gave a black-box separation between one-way functions and public-key encryption: a public-key encryption scheme cannot be constructed using one-way functions in a black-box way. In addition, their result implied black-box separations between one-way functions and protocols for certain Secure Function Evaluation (SFE) functionalities (in particular, Oblivious Transfer). Surprisingly, however, since then there has been no further progress in separating one-way functions and SFE functionalities. In this work, we present the complete picture for finite deterministic 2-party SFE functionalities, vis a vis one-way functions. We show that in case of semi-honest adversaries, one-way functions are black-box separated from all such SFE functionalities, except the ones which have unconditionally secure protocols (and hence do not rely on any computational hardness). In the case of active adversaries, a black-box one-way function is indeed useful for SFE, but we show that it is useful only as much as access to an ideal commitment functionality is useful. Technically, our main result establishes the limitations of random oracles for secure computation. We show that a two-party deterministic functionality f has a secure protocol in the random oracle model that is (statistically) secure against semi-honest adversaries if and only if f has a protocol in the plain model that is (perfectly) secure against semi-honest adversaries. Further, in the case of active adversaries, a deterministic SFE functionality f has a (UC or standalone) statistically secure protocol in the random oracle model if and only if f has a (UC or standalone) statistically secure protocol in the commitment-hybrid model. Our proof is based on a "frontier analysis" of two-party protocols, combining it with (extensions of) the "independence learners" of Impagliazzo-Rudich/Barak-Mahmoody. We make essential use of a combinatorial property, originally discovered by Kushilevitz (FOCS 1989), of functions that have semi-honest secure protocols in the plain model (and hence our analysis applies only to functions of polynomial-sized domains, for which such a characterization is known). Our result could be seen as a first step towards proving a conjecture that we put forth in this work and call the Many-Worlds Conjecture. For every 2-party SFE functionality f, one can consider a "world" where f can be semi-honest securely realized in the computational setting. The Many-Worlds Conjecture states that there are infinitely many "distinct worlds" between minicrypt and cryptomania in the universe of Impagliazzo's Worlds. (A full version of this paper is available at [36].)
Mahmoody et al. REF consider semi-honest, deterministic functionalities with polynomial-sized domains and show that any such functionality which can be realized in the random oracle model is "trivial" in the same sense as above.
1930181
Limits of random oracles in secure computation
{ "venue": "ITCS '14", "journal": null, "mag_field_of_study": [ "Computer Science", "Mathematics" ] }
Abstract. Though research on the Semantic Web has progressed at a steady pace, its promise has yet to be realized. One major difficulty is that, by its very nature, the Semantic Web is a large, uncensored system to which anyone may contribute. This raises the question of how much credence to give each source. We cannot expect each user to know the trustworthiness of each source, nor would we want to assign top-down or global credibility values due to the subjective nature of trust. We tackle this problem by employing a web of trust, in which each user maintains trusts in a small number of other users. We then compose these trusts into trust values for all other users. The result of our computation is not an agglomerate "trustworthiness" of each user. Instead, each user receives a personalized set of trusts, which may vary widely from person to person. We define properties for combination functions which merge such trusts, and define a class of functions for which merging may be done locally while maintaining these properties. We give examples of specific functions and apply them to data from Epinions and our BibServ bibliography server. Experiments confirm that the methods are robust to noise, and do not put unreasonable expectations on users. We hope that these methods will help move the Semantic Web closer to fulfilling its promise.
Richardson et al. REF enable each user to maintain trust values for a small number of other users and provide combination functions that merge these values, along paths of trust between users, into personalized trust values for all users.
13034456
Trust Management for the Semantic Web
{ "venue": "International Semantic Web Conference", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Abstract-Three automatic test case generation algorithms intended to test the resource allocation mechanisms of telecommunications software systems are introduced. Although these techniques were specifically designed for testing telecommunications software, they can be used to generate test cases for any software system that is modelable by a Markov chain provided operational profile data can either be collected or estimated. These algorithms have been used successfully to perform load testing for several real industrial software systems. Experience generating test suites for five such systems is presented. Early experience with the algorithms indicate that they are highly effective at detecting subtle faults that would have been likely to be missed if load testing had been done in the more traditional way, using hand-crafted test cases. A domain-based reliability measure is applied to systems after the load testing algorithms have been used to generate test data. Data are presented for the same five industrial telecommunications systems in order to track the reliability as a function of the degree of system degradation experienced.
REF present techniques for generating test cases that apply to software that can be modeled by Markov chains, provided that operational profile data is available.
40255849
The automatic generation of load test suites and the assessment of the resulting software
{ "venue": "IEEE Trans. Software Eng.", "journal": "IEEE Trans. Software Eng.", "mag_field_of_study": [ "Computer Science" ] }
Abstract. Distributed Human Computation (DHC) is used to solve computational problems by incorporating the collaborative effort of a large number of humans. It is also a solution to AI-complete problems such as natural language processing. The Semantic Web with its root in AI has many research problems that are considered as AI-complete. E.g. co-reference resolution, which involves determining whether different URIs refer to the same entity, is a significant hurdle to overcome in the realisation of large-scale Semantic Web applications. In this paper, we propose a framework for building a DHC system on top of the Linked Data Cloud to solve various computational problems. To demonstrate the concept, we are focusing on handling the co-reference resolution when integrating distributed datasets. Traditionally machine-learning algorithms are used as a solution for this but they are often computationally expensive, error-prone and do not scale. We designed a DHC system named iamResearcher, which solves the scientific publication author identity coreference problem when integrating distributed bibliographic datasets. In our system, we aggregated 6 million bibliographic data from various publication repositories. Users can sign up to the system to audit and align their own publications, thus solving the co-reference problem in a distributed manner. The aggregated results are dereferenceable in the Open Linked Data Cloud.
Reference REF proposes a distributed human computation approach to co-reference resolution in which authors can claim and align their own publications.
5432442
Distributed human computation framework for linked data co-reference resolution
{ "venue": "ESWC", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
An ad-hoc network is the cooperative engagement of a collection of (typically wireless) mobile nodes without the required intervention of any centralized access point or existing infrastructure. To provide optimal communication ability, a routing protocol for such a dynamic self-starting network must be capable of unicast, broadcast, and multicast. In this paper we extend Ad-hoc On-Demand Distance Vector Routing (AODV), an algorithm for the operation of such ad-hoc networks, to offer novel multicast capabilities which follow naturally from the way AODV establishes unicast routes. AODV builds multicast trees as needed (i.e., on-demand) to connect multicast group members. Control of the multicast tree is distributed so that there is no single point of failure. AODV provides loop-free routes for both unicast and multicast, even while repairing broken links. We include an evaluation methodology and simulation results to validate the correct and efficient operation of the AODV algorithm.
Multicast Ad hoc On-Demand Distance Vector routing (MAODV) REF uses a multicast group leader to manage each group.
2663477
Multicast operation of the ad-hoc on-demand distance vector routing protocol
{ "venue": "MobiCom '99", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Given a single input rainy image, our goal is to visually remove rain streaks and the veiling effect caused by scattering and transmission of rain streaks and rain droplets. We are particularly concerned with heavy rain, where rain streaks of various sizes and directions can overlap each other and the veiling effect reduces contrast severely. To achieve our goal, we introduce a scale-aware multi-stage convolutional neural network. Our main idea here is that different sizes of rain-streaks visually degrade the scene in different ways. Large nearby streaks obstruct larger regions and are likely to reflect specular highlights more prominently than smaller distant streaks. These different effects of different streaks have their own characteristics in their image features, and thus need to be treated differently. To realize this, we create parallel sub-networks that are trained and made aware of these different scales of rain streaks. To our knowledge, this idea of parallel sub-networks that treats the same class of objects according to their unique sub-classes is novel, particularly in the context of rain removal. To verify our idea, we conducted experiments on both synthetic and real images, and found that our method is effective and outperforms the state-of-the-art methods.
A multi-stage network consisting of several parallel sub-networks was designed to model and remove rain streaks of various sizes REF.
21982789
Single Image Deraining using Scale-Aware Multi-Stage Recurrent Network
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science" ] }
Most approaches in predicting protein function from protein-protein interaction data utilize the observation that a protein often shares functions with proteins that interact with it (its level-1 neighbours). However, proteins that interact with the same proteins (i.e. level-2 neighbours) may also have a greater likelihood of sharing similar physical or biochemical characteristics. We speculate that two separate forms of functional association account for such a phenomenon, and a protein is likely to share functions with its level-1 and/or level-2 neighbours. We are interested to find out how significant functional association between level-2 neighbours is and how it can be exploited for protein function prediction. We made a statistical study on recent interaction data and observed that functional association between level-2 neighbours is clearly observable. A substantial number of proteins are observed to share functions with level-2 neighbours but not with level-1 neighbours. We develop an algorithm that predicts the functions of a protein in two steps: (1) assign a weight to each of its level-1 and level-2 neighbours by estimating its functional similarity with the protein using the local topology of the interaction network as well as the reliability of experimental sources; (2) score each function based on its weighted frequency in these neighbours. Using leave-one-out cross validation, we compare the performance of our method against that of several other existing approaches and show that our method performs well.
The indirect-neighbours method REF assumes that proteins interacting with the same proteins may also share similar functions; it exploits both indirect (level-2) and immediate (level-1) neighbours to rank each candidate function.
7722647
null
null
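The two-step neighbour-weighting scheme summarized in the entry above can be sketched as weighted voting. The toy weights and annotations below are invented for illustration; the real method derives weights from interaction-network topology and experimental reliability.

```python
# Weighted neighbour voting sketch for protein function prediction (toy data).
from collections import defaultdict

# neighbour -> similarity weight for one target protein (level-1 and level-2 mixed)
weights = {"p1": 0.9, "p2": 0.6, "p3": 0.4}       # hypothetical weights
functions = {"p1": {"kinase", "binding"},          # hypothetical annotations
             "p2": {"kinase"},
             "p3": {"transport"}}

scores = defaultdict(float)
for prot, w in weights.items():
    for fn in functions[prot]:
        scores[fn] += w                            # weighted frequency of each function

for fn, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{fn}: {s:.1f}")                        # kinase 1.5 > binding 0.9 > transport 0.4
```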
Power management and competitiveness. Problem setting: In a general scenario, we are given a device that always resides in one of several states. In addition to the active state there can be, for instance, standby, suspend, sleep, and full-off states. These states have individual power consumption rates. The energy incurred in transitioning the system from a higher-power to a lower-power state is usually negligible. However, a power-up operation consumes a significant amount of energy. Over time, the device experiences an alternating sequence of active and idle periods. During active periods, the system must reside in the active mode to perform the required tasks. During idle periods, the system may be moved to lower-power states. An algorithm has to decide when to perform the transitions and to which states to move. The goal is to minimize the total energy consumption. As the energy consumption during the active periods is fixed, assuming that prescribed tasks have to be performed, we concentrate on energy minimization in the idle intervals. In fact, we focus on any idle period and optimize the energy consumption in any such time window. This power management problem is an online problem, that is, at any time a device is not aware of future events. More specifically, in an idle period, the system has no information when the period ends. Is it worthwhile to move to a lower-power state and benefit from the reduced energy consumption, given that the system must finally be powered up again to the active mode at a cost? Performance analysis: Despite the handicap of not knowing the future, an online strategy should achieve a provably good performance. Here the algorithms community resorts to competitive analysis, where an online algorithm ALG is compared to an optimal offline algorithm OPT [38]. OPT is an omniscient strategy that knows the entire future and can compute a state transition schedule of minimum total energy. Online algorithm ALG is called c-competitive if for every input, i.e., for any idle period, the total energy consumption of ALG is at most c times that of OPT. Competitive analysis provides a strong worst-case performance guarantee. An online strategy must perform well on all inputs (idle periods) that might even be generated by an adversary. This adversarial scenario may seem pessimistic, but it is consistent with classical algorithm analysis that evaluates strategies in terms of their worst-case resources, typically running time or memory requirements. In this section, we will mostly study algorithms using competitive analysis but will also consider performance on inputs that are generated according to probability distributions. In the following, we will first study systems that consist of two states only. Then we will address systems with multiple states. We stress that we consider the minimization of energy. We ignore the delay that arises when a system is transitioned from a lower-power to a higher-power state. Consider a two-state system that may reside in an active state or in a sleep state. Let r be the power consumption rate, measured in energy units per time unit, in the active state. The power consumption rate in the sleep mode is assumed to be 0. The results we present in the following generalize to an arbitrary consumption rate in the sleep mode. Let b energy units, where b > 0, be required to transition the system from the sleep state to the active state. We assume that the energy of transitioning from the active to the sleep state is 0.
If this is not the case, we can simply fold the corresponding energy into the cost b incurred in the next power-up operation. The system experiences an idle period whose length T is initially unknown. An optimal offline algorithm OPT, knowing T in advance, is simple to formulate. We compare the value of rT, which is the total energy consumed during the idle period when residing in the active mode, to the power-up cost of b. If rT < b, OPT remains in the active state throughout the idle period as transitioning between the active and sleep modes costs more. If rT ³ b, using the sleep mode is beneficial. In this case OPT transitions to the sleep state right at the beginning of the idle period and powers up to the active state at the end of the period. The following deterministic online algorithm mimics the behavior of OPT, which uses the sleep mode on idle periods of length at least b/r. energy has become a leading design constraint for computing devices. hardware engineers and system designers explore new directions to reduce energy consumption of their products. Consider a system with l states s 1 , …, s l . Let r i be the power consumption rate of s i . We number the states in order of decreasing rates, such as, r 1 > … > r l . Hence s 1 is the active state and s l represents the state with lowest energy consumption. Let b i be the energy required to transition the system from s i to the active state s 1 . As transitions from lower-power states are more expensive we have b 1 £ … £ b l . Moreover, obviously, b 1 = 0. We assume again that transitions from higher-power to lower-power states incur 0 cost because the corresponding energy is usually negligible. The goal is to construct a state transition schedule minimizing the total energy consumption in an idle period. Irani et al. 24 presented online and offline algorithms. They assume that the transition energies are additive, such as, transitioning from a lowerpower state s j to a higher-power state s i , where i < j, incurs a cost of b j − b i . An algorithm aLG-d: In an idle period first remain in the active state. After b/r time units, if the period has not ended yet, transition to the sleep state. It is easy to prove that ALG-D is 2-competitive. We only need to consider two cases. If rT < b, then ALG-D consumes rT units of energy during the idle interval and this is in fact equal to the consumption of OPT. If rT ³ b, then ALG-D first consumes r . b/r = b energy units to remain in the active state. An additional power-up cost of b is incurred at the end of the idle interval. Hence, ALG-D's total cost is 2b, while OPT incurs a cost of b for the power-up operation at the end of the idle period. It is also easy to verify that no deterministic online algorithm can achieve a competitive ratio smaller than 2. If an algorithm transitions to the sleep state after exactly t time units, then in an idle period of length t it incurs a cost of tr + b while OPT pays min{rt, b} only. We remark that power management in two-state systems corresponds to the famous ski-rental problem, a cornerstone problem in the theory of online algorithms, see, for example, Irani and Karlin. 26 Interestingly, it is possible to beat the competitiveness of 2 using randomization. A randomized algorithm transitions to the sleep state according to a probability density function p(t). The probability that the system powers down during the first t 0 time units of an idle period is ò 0 t 0 p(t)dt. Karlin et al. 28 determined the best probability distribution. 
Interestingly, it is possible to beat the competitiveness of 2 using randomization. A randomized algorithm transitions to the sleep state according to a probability density function p(t); the probability that the system powers down during the first t_0 time units of an idle period is ∫_0^{t_0} p(t) dt. Karlin et al.28 determined the best probability distribution. The density function is the exponential function e^{rt/b}, multiplied by the normalizing factor r/((e − 1)b) and restricted to the range 0 ≤ t ≤ b/r, which ensures that p(t) integrated over the entire time horizon is 1, that is, the system is definitely powered down at some point.

Algorithm ALG-R: Transition to the sleep state according to the probability density function p(t) = (r/((e − 1)b))·e^{rt/b} for 0 ≤ t ≤ b/r, and p(t) = 0 otherwise.

ALG-R achieves a considerably improved competitiveness, as compared to deterministic strategies. Results by Karlin et al.28 imply that ALG-R attains a competitive ratio of e/(e − 1) ≈ 1.58, where e ≈ 2.71 is Euler's number. More precisely, in any idle period the expected energy consumption of ALG-R is not more than e/(e − 1) times that of OPT. Again, e/(e − 1) is the best competitive ratio a randomized strategy can obtain.

From a practical point of view, it is also instructive to study stochastic settings where the length of idle periods is governed by probability distributions. In practice, short periods might occur more frequently. Probability distributions can also model specific situations where either very short or very long idle periods are more likely to occur, compared to periods of medium length. Of course, such a probability distribution may not be known in advance, but it can be learned over time. In the following, we assume that the distribution is known. Let Q = (q(T))_{0≤T<∞} be a fixed probability distribution on the length T of idle periods. For any t ≥ 0, consider the deterministic algorithm ALG_t that always powers down after exactly t time units. If the idle period ends before ALG_t powers down, i.e., if T < t, then the algorithm remains in the active state for the duration of the idle interval and uses an energy of rT. If the idle period has not yet ended when ALG_t powers down, i.e., if T ≥ t, then the algorithm incurs a fixed energy of rt + b, because an energy of rt is consumed before the system is powered down and a cost of b is incurred to transition back to the active mode. In order to determine the expected cost of ALG_t, we have to integrate over all possibilities for the length T of the idle period using the probability distribution Q. Note that the probability that the idle period has not yet ended when ALG_t powers down is ∫_t^∞ q(T) dT. The two terms in the expression below represent the two cases:

E[ALG_t] = ∫_0^t rT·q(T) dT + (rt + b)·∫_t^∞ q(T) dT.    (1)

Karlin et al.28 proposed the following strategy that, given Q, simply uses the best algorithm ALG_t.

Algorithm ALG-P: Given a fixed Q, let A*_Q be the deterministic algorithm ALG_t that minimizes Equation (1), and use A*_Q.

Karlin et al. proved that for any Q, the expected energy consumption of ALG-P is at most e/(e − 1) times the expected optimum consumption.

We next address systems with multiple states. Consider a system with l states s_1, …, s_l. Let r_i be the power consumption rate of s_i. We number the states in order of decreasing rates, i.e., r_1 > … > r_l; hence s_1 is the active state and s_l represents the state with lowest energy consumption. Let b_i be the energy required to transition the system from s_i to the active state s_1. As transitions from lower-power states are more expensive, we have b_1 ≤ … ≤ b_l; moreover, obviously, b_1 = 0. We assume again that transitions from higher-power to lower-power states incur 0 cost, because the corresponding energy is usually negligible. The goal is to construct a state transition schedule minimizing the total energy consumption in an idle period. Irani et al.24 presented online and offline algorithms. They assume that the transition energies are additive, i.e., transitioning from a lower-power state s_j to a higher-power state s_i, where i < j, incurs a cost of b_j − b_i.

Interestingly, for variable T the optimal cost has a simple graphical representation, see Figure 1. If we consider all linear functions f_i(t) = r_i·t + b_i, representing the total energy consumption when state s_i is used throughout an idle period of length t, then the optimum energy consumption is given by the lower envelope of the arrangement of lines. One can use this lower envelope to guide an online algorithm in selecting which state to use at any time. Let S_OPT(t) denote the state used by OPT in an idle period of total length t, i.e., S_OPT(t) is the state s_i whose function attains the lower envelope at t, so that f_i(t) = min_{1≤j≤l} f_j(t). Here we assume that states whose functions do not occur on the lower envelope, at any time, are discarded.

Algorithm Lower-Envelope: In an idle period, at any elapsed time t, reside in state S_OPT(t).

We remark that the algorithm is a generalization of ALG-D for two-state systems. Irani et al.24 proved that Lower-Envelope is 2-competitive. This is the best competitiveness a deterministic algorithm can achieve in arbitrary state systems.
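As a small illustration of the multi-state model (not from the article; the rates and wake-up costs below are hypothetical), the state S_OPT(t) on the lower envelope can be found by directly evaluating the lines f_i(t) = r_i·t + b_i:

```python
def s_opt(t, rates, wakeup):
    """Index of the state attaining the lower envelope min_i (r_i * t + b_i) at time t."""
    costs = [r * t + b for r, b in zip(rates, wakeup)]
    return min(range(len(costs)), key=costs.__getitem__)

rates  = [4.0, 2.0, 1.0, 0.0]   # r_1 > r_2 > r_3 > r_4 (active state first)
wakeup = [0.0, 3.0, 8.0, 20.0]  # b_1 <= b_2 <= b_3 <= b_4, with b_1 = 0

# Online strategy Lower-Envelope: after t idle time units, reside in state s_opt(t).
for t in [0.0, 2.0, 6.0, 10.0, 30.0]:
    print(t, s_opt(t, rates, wakeup))
```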
Irani et al.24 also studied the setting where the length of idle periods is generated by a probability distribution Q = (q(T))_{0≤T<∞}. They determine the times t_i at which an online strategy should move from state s_{i−1} to s_i, 2 ≤ i ≤ l. To this end, consider the deterministic online algorithm ALG_t that transitions to the lower-power state after exactly t time units. We determine the expected cost of ALG_t in an idle period whose length T is generated according to Q, assuming that only states s_{i−1} and s_i are available. Initially ALG_t resides in state s_{i−1}. If the idle period ends before ALG_t transitions to the lower-power state, i.e., if T < t, then the energy consumption is r_{i−1}·T. If the idle period has not ended yet when ALG_t transitions to the lower-power state, i.e., if T ≥ t, the algorithm incurs an energy of r_{i−1}·t while residing in s_{i−1} during the first t time units and an additional energy of r_i·(T − t) when in state s_i during the remaining T − t time units. At the end of the idle period, a power-up cost of b_i − b_{i−1} is paid to transition from s_i back to s_{i−1}. Hence, in this case ALG_t incurs a total energy of r_{i−1}·t + r_i·(T − t) + b_i − b_{i−1}. The expected cost of ALG_t, assuming that only s_{i−1} and s_i are available, is

∫_0^t r_{i−1}·T·q(T) dT + ∫_t^∞ (r_{i−1}·t + r_i·(T − t) + b_i − b_{i−1})·q(T) dT.

Let t_i be the time t that minimizes the above expression. Irani et al.24 proposed the following algorithm.

Algorithm ALG-P(l): Change states at the transition times t_2, …, t_l defined above.

ALG-P(l) is a generalization of ALG-P for two-state systems. Irani et al. proved that for any fixed probability distribution Q, the expected energy consumption of ALG-P(l) is no more than e/(e − 1) times the expected optimum consumption. Furthermore, Irani et al. presented an approach for learning an initially unknown Q. They combined the approach with ALG-P(l) and performed experimental tests for an IBM mobile hard drive with four power states. The evaluation shows that the combined scheme achieves low energy consumption close to the optimum and usually outperforms many single-value prediction algorithms.

Augustine et al.5 investigate generalized multistate systems in which the state transition energies may take arbitrary values. Let b_ij ≥ 0 be the energy required to transition from s_i to s_j, 1 ≤ i, j ≤ l. Augustine et al. demonstrate that Lower-Envelope can be generalized and achieves a competitiveness of 3 + 2√2 ≈ 5.8. This ratio holds for any state system; better bounds are possible for specific systems. Augustine et al. devise a strategy that, for a given system S, achieves a competitive ratio arbitrarily close to the best competitiveness c* possible for S. Finally, the authors consider stochastic settings and develop optimal state transition times.

Many modern microprocessors can run at variable speed; examples are the Intel SpeedStep and the AMD PowerNow processors. High speeds result in higher performance but also high energy consumption. Lower speeds save energy, but performance degrades. The well-known cube-root rule for CMOS devices states that the speed s of a device is proportional to the cube root of the power or, equivalently, the power is proportional to s^3. The algorithms literature considers a generalization of this rule: if a processor runs at speed s, then the required power is s^α, where α > 1 is a constant. Obviously, energy consumption is power integrated over time. The goal is to dynamically set the speed of a processor so as to minimize energy consumption, while still providing a desired quality of service. Dynamic speed scaling leads to many challenging scheduling problems: at any time a scheduler has to decide not only which job to execute but also which speed to use.
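A tiny helper, assuming the power function P(s) = s^α with a hypothetical α = 3, makes the model concrete: the energy of a piecewise-constant speed profile is power integrated over time, and the convexity of s^α is what makes slow, even execution cheaper than racing and then idling:

```python
def schedule_energy(segments, alpha=3.0):
    """Energy of a piecewise-constant speed schedule under the power function P(s) = s**alpha.
    segments: list of (speed, duration) pairs; energy is power integrated over time."""
    return sum(s ** alpha * dt for s, dt in segments)

def processed_volume(segments):
    """Work completed: speed integrated over time."""
    return sum(s * dt for s, dt in segments)

# Both profiles complete 2 units of work, but the constant-speed one is far cheaper:
fast_then_idle = [(2.0, 1.0), (0.0, 1.0)]   # energy 2**3 * 1 = 8
constant       = [(1.0, 2.0)]               # energy 1**3 * 2 = 2
print(schedule_energy(fast_then_idle), schedule_energy(constant))
```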
Consequently, there has been considerable research interest in the design and analysis of efficient scheduling algorithms. This section reviews the most important results developed over the past years. We first address scheduling problems with hard job deadlines; then we consider the minimization of response times and other objectives. In general, two scenarios are of interest. In the offline setting, all jobs to be processed are known in advance. In the online setting, jobs arrive over time, and an algorithm, at any time, has to make scheduling decisions without knowledge of any future jobs. Online strategies are evaluated again using competitive analysis: online algorithm ALG is c-competitive if, for every input, the objective function value (typically the energy consumption) of ALG is within c times the value of an optimal solution.

In a seminal paper initiating the algorithmic study of speed scaling, Yao et al.40 investigated a scheduling problem with strict job deadlines. To date, this framework is by far the most extensively studied algorithmic speed scaling problem. Consider n jobs J_1, …, J_n that have to be processed on a variable-speed processor. Each job J_i is specified by a release time r_i, a deadline d_i, and a processing volume w_i. The release time and the deadline mark the time interval in which the job must be executed. The processing volume is the amount of work that must be done to complete the job; intuitively, it can be viewed as the number of CPU cycles necessary to finish the job. The processing time of a job depends on the speed: if J_i is executed at constant speed s, it takes w_i/s time units to complete the job. Preemption of jobs is allowed, that is, the processing of a job may be suspended and resumed later. The goal is to construct a feasible schedule minimizing the total energy consumption. The framework by Yao et al. assumes there is no upper bound on the maximum processor speed; hence there always exists a feasible schedule satisfying all job deadlines. Furthermore, it is assumed that a continuous spectrum of speeds is available. We will discuss later how to relax these assumptions.

Fundamental algorithms: Yao et al.40 first study the offline setting and develop an algorithm for computing optimal solutions, minimizing total energy consumption. The strategy is known as YDS, referring to the initials of the authors. The algorithm proceeds in a series of iterations: in each iteration, a time interval of maximum density is identified and a corresponding partial schedule is constructed. Loosely speaking, the density of an interval I is the minimum average speed necessary to complete all jobs that must be scheduled in I; a high density requires a high speed. Formally, the density D_I of a time interval I = [t, t'] is the total work to be completed in I divided by the length of I. More precisely, let S_I be the set of jobs J_i that must be processed in I because their release time and deadline are in I, i.e., S_I = {J_i : [r_i, d_i] ⊆ [t, t']}; then D_I = (Σ_{J_i ∈ S_I} w_i)/(t' − t). Algorithm YDS repeatedly determines the interval I of maximum density. In such an interval I, the algorithm schedules the jobs of S_I at speed D_I using the Earliest Deadline First (EDF) policy; this well-known policy always executes the job having the earliest deadline, among the available unfinished jobs. After this assignment, YDS removes set S_I as well as time interval I from the problem instance. More specifically, for any unscheduled job J_i whose deadline is in the interval I, the new deadline is set to the beginning of I, because the time window I is not available anymore for the processing of J_i; formally, for any J_i with d_i ∈ I, the new deadline is set to d_i := t. Similarly, for any unscheduled J_i whose release time is in I, the new release time is set to the end of I; formally, for any J_i with r_i ∈ I, the new release time is r_i := t'. Time interval I is discarded, and this process repeats until there are no more unscheduled jobs. A summary of the algorithm is sketched below.
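The following is a compact, unoptimized sketch of YDS consistent with the description above (a naive O(n^3)-style implementation, not the article's original pseudocode; the collapse step carries out the deadline/release updates, and the job numbers in the example are hypothetical):

```python
def yds(jobs):
    """Offline YDS: repeatedly schedule a maximum-density interval, then collapse it.
    jobs: dict job_id -> [release, deadline, work]. Returns a list of rounds
    (interval, speed, job_ids); within each interval the jobs run at that speed under EDF.
    Intervals refer to the timeline as shrunk in earlier rounds, as described above."""
    jobs = {j: list(v) for j, v in jobs.items()}
    rounds = []
    while jobs:
        # Only release times and deadlines matter as interval boundaries.
        times = sorted({t for r, d, w in jobs.values() for t in (r, d)})
        best = None
        for i, t1 in enumerate(times):
            for t2 in times[i + 1:]:
                inside = [j for j, (r, d, w) in jobs.items() if t1 <= r and d <= t2]
                if inside:
                    dens = sum(jobs[j][2] for j in inside) / (t2 - t1)
                    if best is None or dens > best[0]:
                        best = (dens, t1, t2, inside)
        dens, t1, t2, inside = best
        rounds.append(((t1, t2), dens, inside))   # schedule S_I at speed D_I (EDF inside I)
        for j in inside:
            del jobs[j]
        for v in jobs.values():   # collapse [t1, t2]: clip times into it, shift times beyond it
            for k in (0, 1):
                x = v[k]
                v[k] = x if x <= t1 else (t1 if x <= t2 else x - (t2 - t1))
    return rounds

# Hypothetical three-job instance: job_id -> [release, deadline, work].
print(yds({1: [0, 10, 4], 2: [3, 8, 6], 3: [5, 8, 3]}))
```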
As an example, consider an instance whose jobs are depicted along a horizontal time axis. In the first iteration, YDS identifies I_1 = [3, 8] as the interval of maximum density, along with set S_{I_1} = {J_2, J_3}. In I_1, the red job J_2 is preempted at time 5 to give preference to the orange job J_3, which has an earlier deadline. In the second iteration, I_2 = [13, 20] is the maximum-density interval; the dark green and light green jobs are scheduled, and preemption is again used once. In the third iteration, the remaining job J_1 is scheduled in the available time slots. Obviously, when identifying intervals of maximum density, YDS only has to consider intervals whose boundaries are equal to the release times and deadlines of the jobs. A straightforward implementation of the algorithm has a running time of O(n^3); Li et al.34 showed that the time can be reduced to O(n^2 log n). Further improvements are possible if the job execution intervals form a tree structure.

If the given job instance is not feasible (which can arise, for instance, when the maximum processor speed is bounded), the situation is more delicate: it is impossible to complete all the jobs. The goal is then to design algorithms that achieve good throughput, which is the total processing volume of jobs finished by their deadline, and at the same time optimize energy consumption. Papers7,17 present algorithms that even work online. At any time the strategies maintain a pool of jobs they intend to complete; newly arriving jobs may be admitted to this pool, and if the pool contains too large a processing volume, jobs are expelled such that the throughput is not diminished significantly. The algorithm by Bansal et al.7 is 4-competitive in terms of throughput and constant competitive with respect to energy consumption.

Yao et al.40 also devised online strategies, among them Average Rate and Optimal Available. Optimal Available recomputes, whenever a new job arrives, an optimal schedule for the currently available unfinished jobs and runs at the corresponding speeds. Bansal et al.9 gave a comprehensive analysis of this algorithm and proved that its competitive ratio is exactly α^α. Hence, in terms of competitiveness, Optimal Available is better than Average Rate.
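Since all pending jobs have already been released at the current time, the speed Optimal Available runs at can be computed directly as the maximum density over intervals that start now and end at a pending deadline. A minimal sketch under that assumption (all numbers hypothetical):

```python
def oa_speed(now, pending):
    """Speed used by Optimal Available at time `now`: the maximum density over
    intervals [now, d] for pending deadlines d.
    pending: list of (deadline, remaining_work) pairs for released, unfinished jobs."""
    speed = 0.0
    for d in sorted({d for d, w in pending}):
        work = sum(w for dd, w in pending if dd <= d)   # work that must finish by d
        speed = max(speed, work / (d - now))
    return speed

# Hypothetical pending work at time 0: 4 units due by time 2, plus 2 more by time 6.
print(oa_speed(0.0, [(2.0, 4.0), (6.0, 2.0)]))  # max(4/2, 6/6) = 2.0
```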
Bansal et al.9 also presented a new online algorithm, called BKP according to the initials of the authors, which can be viewed as approximating the optimal speeds of YDS in an online manner. Again, the algorithm considers interval densities. For times t, t_1, and t_2 with t_1 < t ≤ t_2, let w(t, t_1, t_2) be the total processing volume of jobs that have arrived by time t, have a release time of at least t_1, and a deadline of at most t_2. Then, intuitively, max_{t_1,t_2} w(t, t_1, t_2)/(t_2 − t_1) is an estimate of the speed used by YDS, based on the knowledge of jobs that have arrived by time t. The new algorithm BKP approximates this speed by considering specific time windows [e·t − (e − 1)·t', t'], for t' > t, of length e(t' − t); the corresponding necessary speed is then multiplied by a factor of e.

Algorithm BKP: At any time t, use a speed of e·s(t), where s(t) = max_{t'>t} w(t, e·t − (e − 1)·t', t')/(e(t' − t)). Available unfinished jobs are processed using EDF.

Bansal et al.9 proved that BKP achieves a competitive ratio of 2(α/(α − 1))^α e^α, which is better than the competitiveness of Optimal Available for large values of α. All the above online algorithms attain constant competitive ratios that depend on α and no other problem parameter. The dependence on α is exponential; for small values of α, which occur in practice, the competitive ratios are reasonably small. A result by Bansal et al.9 implies that the exponential dependence on α is inherent to the problem: any randomized online algorithm has a competitiveness of at least Ω((4/3)^α).

Temperature minimization: High processor speeds lead to high temperatures, which impair a processor's reliability and lifetime. Bansal et al.9 consider the minimization of the maximum temperature that arises during processing. They assume that cooling follows Newton's law, which states that the rate of cooling of a body is proportional to the difference in temperature between the body and the environment. Bansal et al.9 show that the algorithms YDS and BKP have favorable properties here: for any job sequence, the maximum temperature is within a constant factor of the minimum possible.

Refinements: The problem setting considered so far assumes a continuous, unbounded spectrum of speeds. In practice, however, only a finite set of discrete speed levels s_1 < s_2 < … < s_d is available, and the maximum speed is bounded. Irani et al.23 investigate an extended problem setting where a variable-speed processor may additionally be transitioned into a sleep state: in the sleep state the energy consumption is 0, while in the active state even at speed 0 some non-negative amount of energy is consumed. Hence, Irani et al.23 combine speed scaling with power-down mechanisms. In the standard setting without a sleep state, algorithms tend to use low speed levels subject to release time and deadline constraints; in contrast, in the setting with a sleep state it can be beneficial to speed up a job so as to generate idle times in which the processor can be transitioned to the sleep mode. Irani et al.23 develop online and offline algorithms for this extended setting. Baptiste et al.11 and Demaine et al.21 also study scheduling problems where a processor may be set asleep, albeit in a setting without speed scaling.

Minimizing response time: A classical objective in scheduling is the minimization of response times. A user releasing a task to a system would like to receive feedback, say the result of a computation, as quickly as possible; user satisfaction often depends on how fast a device reacts. Unfortunately, response time minimization and energy minimization are contradicting objectives. To achieve fast response times, a system must usually use high processor speeds, which lead to high energy consumption; to save energy, low speeds should be used, which result in high response times. Hence, one has to find ways to integrate both objectives. Consider n jobs J_1, …, J_n that have to be scheduled on a variable-speed processor. Each job J_i is specified by a release time r_i and a processing volume w_i; when a job arrives, its processing volume is known. Preemption of jobs is allowed. The flow time of a job is the time that elapses between its release and its completion, and a natural objective is to keep the total flow time of all jobs small.
Pruhs et al.37 study a problem setting where a fixed energy volume E is given and the goal is to minimize the total flow time of the jobs. The authors assume all jobs have the same processing volume; by scaling, we can assume all jobs have unit size. Pruhs et al.37 consider the offline scenario where all the jobs are known in advance and show that optimal schedules can be computed in polynomial time. However, in this framework with a limited energy volume it is difficult to construct good online algorithms: if future jobs are unknown, it is unclear how much energy to invest for the currently available tasks.

Energy plus flow times: Albers and Fujiwara2 proposed another approach to integrate energy and flow time minimization. They consider a combined objective function that simply adds the two costs: letting E denote the energy consumption of a schedule and f_i the flow time of job J_i, we wish to minimize g = E + Σ_i f_i. By multiplying either the energy or the flow time by a scalar, we can also consider a weighted combination of the two costs, expressing the relative value of the two terms in the total cost. Albers and Fujiwara concentrate on unit-size jobs and show that optimal offline schedules can be constructed in polynomial time using a dynamic programming approach; in fact the algorithm can also be used to minimize the total flow time of jobs given a fixed energy volume. Bansal et al. and Lam et al.7,32 propose algorithms for the setting where there is an upper bound on the maximum processor speed. All the above results assume that when a job arrives, its processing volume is known; papers18,32 investigate the harder case where this information is not available.

The results presented so far address single-processor architectures. However, energy consumption is also a major concern in multiprocessor environments, where relatively few results are known so far. Albers et al.3 investigate deadline-based scheduling on m identical parallel processors; the goal is to minimize the total energy on all the machines. The authors first settle the complexity of the offline problem by showing that computing optimal schedules is NP-hard, even for unit-size jobs. Hence, unless P = NP, optimal solutions cannot be computed efficiently. Albers et al.3 then develop polynomial-time offline algorithms that achieve constant-factor approximations, i.e., for any input the consumed energy is within a constant factor of the true optimum. They also devise online algorithms attaining constant competitive ratios. Lam et al.30 study deadline-based scheduling on two speed-bounded processors; they present a strategy that is constant competitive in terms of throughput maximization and energy minimization. Bunde15 investigates flow time minimization in multiprocessor environments, given a fixed energy volume, and presents hardness results as well as approximation guarantees for unit-size jobs. Lam et al.31 consider the objective function of minimizing energy plus flow times and design online algorithms achieving constant competitive ratios.

Makespan minimization: Another basic objective function in scheduling is makespan minimization, that is, the minimization of the point in time when the entire schedule ends. Bunde15 assumes that jobs arrive over time and develops algorithms for single- and multiprocessor environments. Pruhs et al.36 consider tasks having precedence constraints defined between them and devise algorithms for parallel processors given a fixed energy volume.
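As an illustration of the combined objective g = E + Σ_i f_i, the following sketch evaluates it for unit-size jobs processed in FIFO order on one processor, each at a chosen constant speed (the FIFO order, α = 3, and all numbers are assumptions made for the example, not part of the cited algorithms):

```python
def energy_plus_flow(release, speeds, alpha=3.0):
    """Objective g = E + sum of flow times for unit-size jobs run FIFO on one processor.
    Job i, released at release[i], runs at constant speed speeds[i]; its energy is
    speeds[i]**(alpha - 1), since power s**alpha is drawn for 1/s time units."""
    t, g = 0.0, 0.0
    for r, s in sorted(zip(release, speeds)):
        t = max(t, r) + 1.0 / s          # completion time of this job
        g += s ** (alpha - 1) + (t - r)  # energy plus flow time
    return g

# Hypothetical instance: three unit jobs; higher speeds lower flow but raise energy.
release = [0.0, 0.1, 0.2]
print(energy_plus_flow(release, [1.0, 1.0, 1.0]))
print(energy_plus_flow(release, [2.0, 2.0, 2.0]))
```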
Wireless networks such as ad hoc networks and sensor networks have received considerable attention over the last few years. Prominent applications of such networks are habitat observation, environmental monitoring, and forecasting. Network nodes usually have very limited battery capacity, so effective energy management strategies are essential to improve the lifetime of a network. In this survey, we focus on two algorithmic problems that have received considerable interest in the research community recently; moreover, these problems can be viewed as scheduling problems and hence are related to the topics addressed in the previous sections.

Network topologies: Wireless ad hoc networks do not have a fixed infrastructure. The network basically consists of a collection of radio stations with antennas for sending and receiving signals. During transmission, a station s has to choose a transmission power P_s, taking into account that the signal strength decreases over distance. The signal is successfully received by a station t only if P_s/dist(s, t)^α > γ. Here dist(s, t) denotes the distance between s and t, coefficient α > 1 is the attenuation rate, and γ > 0 is a transmission quality parameter. In practice the attenuation rate is in the range between 2 and 5; without loss of generality we may assume γ = 1.

In data transmission, a very basic operation is broadcast, where a given source node wishes to send a piece of information to all other nodes in the network. We study the problem of designing broadcast topologies allowing energy-efficient broadcast operations in wireless networks. Consider a set V of n nodes that are located in the real plane R^2. A source node s ∈ V has to disseminate a message to all other nodes in the network. However, s does not have to inform all v ∈ V directly; instead, nodes may serve as relay stations. If v receives the message and transmits it to w_1, …, w_k, then v has to use a power of P_v = max_{1≤j≤k} dist(v, w_j)^α. The goal is to find a topology, that is, a transmission schedule, that minimizes the total power/energy E = Σ_{v∈V} P_v incurred by all the nodes. Note that such a schedule corresponds to a tree T that is rooted at s and contains all the nodes of V; the children of a node v are the nodes to which v transfers the message. Clementi et al.19 showed that the computation of optimal schedules is NP-hard; therefore one resorts to approximations. An algorithm ALG achieves a c-approximation if for every input, i.e., for every node set V, the solution computed by ALG incurs an energy consumption of no more than c times the optimum value. Wan et al.39 investigate various algorithms in terms of their approximation guarantees. The most extensively studied strategy is MST. For a given node set V, MST computes a standard minimum spanning tree T, i.e., a tree of minimum total edge length containing all the vertices of V (see, e.g., Cormen et al.20). The tree is rooted at the source node s, and data transmission is performed along the edges of T, that is, each node transmits a received message to all of its children in the tree. Intuitively, this algorithm is sensible because the small total edge length of a minimum spanning tree should lead to a small overall energy consumption.

Algorithm MST: For a given V, compute a minimum spanning tree T rooted at s. Any node v transmits a given message to all of its children in T.

Results by Wan et al.39 imply that the approximation ratio c of BIP (Broadcast Incremental Power, a greedy strategy that repeatedly adds the node reachable with minimum additional transmission power) satisfies 13/3 ≤ c ≤ 6. It would be interesting to develop tight bounds for this algorithm.
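A minimal sketch of the MST strategy's energy computation (not from the article; it builds the tree with Prim's algorithm on hypothetical plane coordinates, with α = 2):

```python
import math

def mst_broadcast_energy(points, source=0, alpha=2.0):
    """Energy of MST-based broadcasting: build a minimum spanning tree (Prim),
    root it at `source`; each node pays (distance to its farthest child)**alpha."""
    n = len(points)
    dist = lambda u, v: math.dist(points[u], points[v])
    # Prim's algorithm, recording each node's tree parent.
    parent, in_tree = {source: None}, {source}
    while len(in_tree) < n:
        u, v = min(((u, v) for u in in_tree for v in range(n) if v not in in_tree),
                   key=lambda e: dist(*e))
        parent[v] = u
        in_tree.add(v)
    # Transmission power of a node: alpha-th power of its longest edge to a child.
    power = {u: 0.0 for u in range(n)}
    for v, u in parent.items():
        if u is not None:
            power[u] = max(power[u], dist(u, v) ** alpha)
    return sum(power.values())

# Hypothetical node layout in the plane; the source is the node at index 0.
print(mst_broadcast_energy([(0, 0), (1, 0), (2, 1), (0, 2)], alpha=2.0))
```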
As mentioned above, sensor networks are typically used to monitor an environment, measuring, e.g., temperature or a chemical value. The data has to be transferred to a designated sink node that may perform further actions. Becchetti et al.13 and Korteweg et al. develop energy-efficient protocols for data aggregation. Suppose the transmission topology is given by a tree T rooted at the sink s. Data gathered at a network node v is transmitted along the path from v to s in T. Network nodes have the ability to combine data: if two or more data packets simultaneously reside at a node v, then v may merge these packets into a single one and transfer it to the parent node, in the direction of s. The energy incurred by a network node is proportional to the number of packets sent. Becchetti et al.13 assume that data items arrive over time. Each item i is specified by the node v_i where the item arises, an arrival time r_i, and a deadline d_i by which the data must reach the sink. The goal is to find a feasible transmission schedule minimizing the maximum energy required at any node. Becchetti et al. show that the offline problem is NP-hard and present a 2-approximation algorithm; they also develop distributed online algorithms for synchronous as well as asynchronous communication models. Korteweg et al.29 study a problem variant where the data items do not have deadlines but should reach the sink with low latency. They present algorithms that simultaneously approximate energy consumption and latency, considering again various communication models.

In this survey, we have reviewed algorithmic solutions to save energy; another survey on algorithmic problems in power management was written by Irani and Pruhs.27 Over the past months a large number of papers have been published, and we expect that energy conservation from an algorithmic point of view will continue to be an active research topic. There are many directions for future research. With respect to power-down mechanisms, for instance, it would be interesting to design strategies that take into account the latency that arises when a system is transitioned from a sleep state to the active state. Additionally, we need a better understanding of speed scaling techniques in multiprocessor environments, as multicore architectures become more and more common not only in servers but also in desktops and laptops. Moreover, optimization problems in networks deserve further algorithmic investigation; at this point it would be interesting to study energy-efficient point-to-point communication, complementing the existing work on broadcast and data-aggregation protocols. Last but not least, the algorithms presented so far have to be analyzed in terms of their implementation and execution cost: how much extra energy is incurred in executing the algorithms in realistic environments.
Energy management of (not necessarily mobile) computational devices has been a major concern in recent research papers (cf. REF).
16604460
Energy-efficient algorithms
{ "venue": "CACM", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
In an isolated power grid or a micro-grid with a small carbon footprint, the penetration of renewable energy is usually high. In such power grids, energy storage is important to guarantee an uninterrupted and stable power supply for end users. Different types of energy storage have different characteristics, including their round-trip efficiency, power and energy rating, self-discharge, and investment and maintenance costs. In addition, the load characteristics and availability of different types of renewable energy sources vary in different geographic regions and at different times of year. Therefore joint capacity optimization for multiple types of energy storage and generation is important when designing this type of power system. In this paper, we formulate a cost minimization problem for storage and generation planning, considering both the initial investment cost and operational/maintenance cost, and propose a distributed optimization framework to overcome the difficulty brought about by the large size of the optimization problem. The results will help in making decisions on energy storage and generation capacity planning in future decentralized power grids with high renewable penetrations. Index Terms: Capacity planning, distributed optimization, energy storage, micro-grid, renewable energy sources. Nomenclature: set of different renewable generators; types of renewable generators; renewable generation per unit generation capacity during time period; renewable energy cost during time period; renewable generation during time period; maximum generation capacity; set of different energy storage types; types of energy storage; rated power/energy ratio; one-way energy efficiency; energy storage capacity; energy loss ratio per unit time.
REF studies the problem of jointly optimizing multiple energy storage, renewable generator, and diesel generator capacities in the context of a micro-grid with a small carbon footprint.
7178911
Joint Optimization of Hybrid Energy Storage and Generation Capacity With Renewable Energy
{ "venue": "IEEE Transactions on Smart Grid", "journal": "IEEE Transactions on Smart Grid", "mag_field_of_study": [ "Mathematics", "Engineering", "Computer Science" ] }
We introduce the value iteration network (VIN): a fully differentiable neural network with a 'planning module' embedded within. VINs can learn to plan, and are suitable for predicting outcomes that involve planning-based reasoning, such as policies for reinforcement learning. Key to our approach is a novel differentiable approximation of the value-iteration algorithm, which can be represented as a convolutional neural network, and trained end-to-end using standard backpropagation. We evaluate VIN based policies on discrete and continuous path-planning domains, and on a natural-language based search task. We show that by learning an explicit planning computation, VIN policies generalize better to new, unseen domains.
A recent work, Value Iteration Networks (VIN) REF emulates value iteration by leveraging recurrent convolutional neural networks and max-pooling.
11374605
Value Iteration Networks
{ "venue": "Advances in Neural Information Processing Systems 29 pages 2154--2162, 2016", "journal": null, "mag_field_of_study": [ "Computer Science", "Mathematics" ] }
Protecting the privacy of individuals in graph structured data while making accurate versions of the data available is one of the most challenging problems in data privacy. Most efforts to date to perform this data release end up mired in complexity, overwhelm the signal with noise, and are not effective for use in practice. In this paper, we introduce a new method which guarantees differential privacy. It specifies a probability distribution over possible outputs that is carefully defined to maximize the utility for the given input, while still providing the required privacy level. The distribution is designed to form a 'ladder', so that each output achieves the highest 'rung' (maximum probability) compared to less preferable outputs. We show how our ladder framework can be applied to problems of counting the number of occurrences of subgraphs, a vital objective in graph analysis, and give algorithms whose cost is comparable to that of computing the count exactly. Our experimental study confirms that our method outperforms existing methods for counting triangles and stars in terms of accuracy, and provides solutions for some problems for which no effective method was previously known. The results of our algorithms can be used to estimate the parameters of suitable graph models, allowing synthetic graphs to be sampled.
Very recently, Zhang et al. REF propose a ladder framework to privately count the number of occurrences of subgraphs.
11424118
Private Release of Graph Statistics using Ladder Functions
{ "venue": "SIGMOD '15", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
The development of innovative solutions to complex problems has become increasingly challenging. The modern information systems (IS) development model includes the use of cross-functional teams, which comprise both users, such as accountants and salespeople, and IS professionals such as systems analysts and programmers. Team members must work together effectively to produce successful systems. In the past, IS departments perceived themselves as autonomous units that provided specific expertise to user departments. With the team approach, IS professionals are no longer autonomous but are equal members of a group of professionals, each with a specific contribution to make. Their responsibility is no longer independently to design an IS, but instead to carefully direct the users to design their own systems. Expected benefits of successful teams include increased motivation, greater task commitment, higher levels of performance, ability to withstand stress, more innovative solutions [1], and decreased development time [2]. Research is currently underway to find appropriate measures for these factors so team effectiveness can be accurately evaluated [3]. One example of the use of teams in the IS development process is the steering committee, a team composed of the heads of major departments in the organization. In one study, 71 per cent of the respondents reported using a steering committee to determine which new systems would be developed. Almost 83 per cent of these were either satisfied (66.8 per cent) or very satisfied (16 per cent) with the steering committee's performance [4]. While these results suggest the popularity of the team approach to IS planning, the finding that only 16 per cent were very satisfied with the performance is not an overwhelmingly positive evaluation of their effectiveness. If the team approach is truly preferred, as the team-building literature proposes, then one would expect a higher level of satisfaction with team performance. Ineffective teams may be the product of inappropriate team composition. Deciding to use a team approach is only the first step. Great care must be exercised in building the team to ensure its ultimate effectiveness. There are a number of pitfalls involving group dynamics that can undermine a team's effectiveness [5]. This paper proposes a model of the impact of the personality-type composition of a team on overall team performance. The model applies personality-type theory to the team-building process and then illustrates the importance of this theory by evaluating a case example of two software development teams. One of the teams was considered to be very productive by
Bradley and Hebert REF propose a model of the impact of the personality-type composition of a team on overall team performance.
18415782
The effect of personality type on team performance
{ "venue": null, "journal": "Journal of Management Development", "mag_field_of_study": [ "Psychology" ] }
Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and encode a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition.
In REF , attention mechanism was proposed to extend the basic encoder-decoder architecture.
11212020
Neural Machine Translation by Jointly Learning to Align and Translate
{ "venue": "ICLR 2015", "journal": "arXiv: Computation and Language", "mag_field_of_study": [ "Computer Science", "Mathematics" ] }
Material appearance such as that of leather is usually reproduced with microfacet models in computer graphics. A more realistic result is achieved by adding a thin-film coating that produces iridescent colors [Akin 2014]. We replace the classic Fresnel reflectance term with a new Airy reflectance term that accounts for iridescence due to thin-film interference. Our main contribution consists in an analytical integration of the high-frequency spectral oscillations exhibited by Airy reflectance, which is essential for practical rendering in RGB. For the leather material on the model, we used a thin film of index η 2 = 1.3 and thickness d = 290nm, over a rough dielectric base material (α = 0.2, η 3 = 1). When the scene is rotated, goniochromatic effects such as subtle purple colors may be observed at grazing angles. In this work, we introduce an extension to microfacet theory for the rendering of iridescent effects caused by thin films of varying thickness (such as oil, grease, alcohols, etc.) on top of an arbitrarily rough base layer. Our material model is the first to produce a consistent appearance between tristimulus (e.g., RGB) and spectral rendering engines by analytically pre-integrating its spectral response. The proposed extension works with any microfacet-based model: not only on reflection over dielectrics or conductors, but also on transmission through dielectrics. We adapt its evaluation to work in multiscale rendering contexts, and we expose parameters enabling artistic control over iridescent appearance. The overhead compared to using the classic Fresnel reflectance or transmittance terms remains reasonable enough for practical uses in production.
Lately, REF extended microfacet-based models to recreate thin-film interference.
3539206
A practical extension to microfacet theory for the modeling of varying iridescence
{ "venue": "TOGS", "journal": null, "mag_field_of_study": [ "Mathematics", "Computer Science" ] }
We train a neural machine translation (NMT) system to both translate sourcelanguage text and copy target-language text, thereby exploiting monolingual corpora in the target language. Specifically, we create a bitext from the monolingual text in the target language so that each source sentence is identical to the target sentence. This copied data is then mixed with the parallel corpus and the NMT system is trained like normal, with no metadata to distinguish the two input languages. Our proposed method proves to be an effective way of incorporating monolingual data into low-resource NMT. On Turkish↔English and Romanian↔English translation tasks, we see gains of up to 1.2 BLEU over a strong baseline with back-translation. Further analysis shows that the linguistic phenomena behind these gains are different from and largely orthogonal to back-translation, with our copied corpus method improving accuracy on named entities and other words that should remain identical between the source and target languages.
Currey et al. REF copied the target monolingual data to the source side and used the copied data for training NMT.
2147407
Monolingual Data Improves Low-Resource Neural Machine Translation
null
A key challenge in benchmarking is to predict the performance of an application of interest on a number of platforms in order to determine which platform yields the best performance. This paper proposes an approach for doing this. We measure a number of microarchitecture-independent characteristics from the application of interest, and relate these characteristics to the characteristics of the programs from a previously profiled benchmark suite. Based on the similarity of the application of interest with programs in the benchmark suite, we make a performance prediction of the application of interest. We propose and evaluate three approaches (normalization, principal components analysis and genetic algorithm) to transform the raw data set of microarchitecture-independent characteristics into a benchmark space in which the relative distance is a measure for the relative performance differences. We evaluate our approach using all of the SPEC CPU2000 benchmarks and real hardware performance numbers from the SPEC website. Our framework estimates per-benchmark machine ranks with a 0.89 average and a 0.80 worst case rank correlation coefficient.
REF uses runtime-related microarchitecture-independent characteristics in the prediction process.
1119864
Performance prediction based on inherent program similarity
{ "venue": "PACT '06", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Libraries have traditionally used manual image annotation for indexing and then later retrieving their image collections. However, manual image annotation is an expensive and labor intensive procedure and hence there has been great interest in coming up with automatic ways to retrieve images based on content. Here, we propose an automatic approach to annotating and retrieving images based on a training set of images. We assume that regions in an image can be described using a small vocabulary of blobs. Blobs are generated from image features using clustering. Given a training set of images with annotations, we show that probabilistic models allow us to predict the probability of generating a word given the blobs in an image. This may be used to automatically annotate and retrieve images given a word as a query. We show that relevance models allow us to derive these probabilities in a natural way. Experiments show that the annotation performance of this cross-media relevance model is almost six times as good (in terms of mean precision) as a model based on word-blob co-occurrence and twice as good as a state-of-the-art model derived from machine translation. Our approach shows the usefulness of using formal information retrieval models for the task of image annotation and retrieval.
Jeon et al. REF instead assumed that this could be viewed as analogous to the cross-lingual retrieval problem and used a Cross Media Relevance Model (CMRM) to perform both image annotation and ranked retrieval.
14303727
Automatic image annotation and retrieval using cross-media relevance models
{ "venue": "SIGIR '03", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Abstract-We consider a wireless device-to-device (D2D) network where communication is restricted to be single-hop. Users make arbitrary requests from a finite library of files and have pre-cached information on their devices, subject to a per-node storage capacity constraint. A similar problem has already been considered in an infrastructure setting, where all users receive a common multicast (coded) message from a single omniscient server (e.g., a base station having all the files in the library) through a shared bottleneck link. In this paper, we consider a D2D infrastructureless version of the problem. We propose a caching strategy based on deterministic assignment of subpackets of the library files, and a coded delivery strategy where the users send linearly coded messages to each other in order to collectively satisfy their demands. We also consider a random caching strategy, which is more suitable to a fully decentralized implementation. Under certain conditions, both approaches can achieve the information theoretic outer bound within a constant multiplicative factor. In our previous work, we showed that a caching D2D wireless network with one-hop communication, random caching, and uncoded delivery (direct file transmissions) achieves the same throughput scaling law of the infrastructure-based coded multicasting scheme, in the regime of large number of users and files in the library. This shows that the spatial reuse gain of the D2D network is order-equivalent to the coded multicasting gain of single base station transmission. It is, therefore, natural to ask whether these two gains are cumulative, i.e., if a D2D network with both local communication (spatial reuse) and coded multicasting can provide an improved scaling law. Somewhat counterintuitively, we show that these gains do not cumulate (in terms of throughput scaling law). This fact can be explained by noticing that the coded delivery scheme creates messages that are useful to multiple nodes, such that it benefits from broadcasting to as many nodes as possible, while spatial reuse capitalizes on the fact that the communication is local, such that the same time slot can be reused in space across the network. Unfortunately, these two issues are in contrast with each other.
Ji et al. REF discuss spatial reuse gain and coded multicast gain in D2D caching networks.
17432854
Fundamental Limits of Caching in Wireless D2D Networks
{ "venue": "IEEE Transactions on Information Theory", "journal": "IEEE Transactions on Information Theory", "mag_field_of_study": [ "Computer Science", "Mathematics" ] }
Abstract-Reliable data transport in wireless sensor networks is a multifaceted problem influenced by the physical, MAC, network, and transport layers. Because sensor networks are subject to strict resource constraints and are deployed by single organizations, they encourage revisiting traditional layering and are less bound by standardized placement of services such as reliability. This paper presents analysis and experiments resulting in specific recommendations for implementing reliable data transport in sensor nets. To explore reliability at the transport layer, we present RMST (Reliable Multi-Segment Transport), a new transport layer for Directed Diffusion. RMST provides guaranteed delivery and fragmentation/reassembly for applications that require them. RMST is a selective NACK-based protocol that can be configured for in-network caching and repair.
RMST REF (Reliable Multi-Segment Transport) adds reliable transport on top of Directed Diffusion.
108798407
RMST: reliable data transport in sensor networks
{ "venue": "Proceedings of the First IEEE International Workshop on Sensor Network Protocols and Applications, 2003.", "journal": "Proceedings of the First IEEE International Workshop on Sensor Network Protocols and Applications, 2003.", "mag_field_of_study": [ "Engineering" ] }
Abstract-The behavior of a software system often depends on how that system is configured. Small configuration errors can lead to hard-to-diagnose undesired behaviors. We present a technique (and its tool implementation, called ConfDiagnoser) to identify the root cause of a configuration error -a single configuration option that can be changed to produce desired behavior. Our technique uses static analysis, dynamic profiling, and statistical analysis to link the undesired behavior to specific configuration options. It differs from existing approaches in two key aspects: it does not require users to provide a testing oracle (to check whether the software functions correctly) and thus is fully automated; and it can diagnose both crashing and noncrashing errors. We evaluated ConfDiagnoser on 5 non-crashing configuration errors and 9 crashing configuration errors from 5 configurable software systems written in Java. On average, the root cause was ConfDiagnoser's fifth-ranked suggestion; in 10 out of 14 errors, the root cause was one of the top 3 suggestions; and more than half of the time, the root cause was the first suggestion.
Zhang and Ernst combine static analysis, dynamic profiling, and statistical analysis to detect problems in configuration files REF.
11610153
Automated diagnosis of software configuration errors
{ "venue": "ICSE '13", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
In a heterogeneous distributed environment where multiple applications compete and share a limited amount of system resources, applications tend to suffer from variations in resource availability, and are desired to adapt their behavior to the resource variations of the system beyond a minimum Quality of Service (QoS) guarantee. On one hand, current adaptation mechanisms built within an application have the disadvantage of lacking global information to preserve fairness among all applications. On the other hand, an adaptive resource management built within the operating system ignores the data semantics of the application. Hence we believe that a proper adaptive behavior of QoS can be achieved in a middleware framework, having data semantics of the application as well as understanding underlying resource management dynamics. We present a novel control-based middleware framework to enhance QoS adaptations by dynamic control and reconfigurations to the internal functionalities of a distributed multimedia application. Based on a strict model defining QoS metrics, and the Task Flow Model describing the application structure, a Task Control Model is developed based on the control theory. The control theory is applied to the design of Adaptation Tasks to devise control policies for changes of resource requests by individual applications, hence providing graceful degradation or upgrade over a shared resource within a required QoS range, or in extreme cases enforcing reconfiguration process beyond the QoS range. Our assumptions here are that (1) a resource reservation/allocation occurs to preserve the minimal QoS requirement of an application, and (2) not all resource capacity is allocated to reserved service. The adaptation happens over the non-reserved shared capacity. Using this approach, we are able to reason about and validate analytically system attributes such as the stability, agility, fairness and equilibrium values of the control actions over shared resources. Our validation is not only analytical, but also experimental. We have developed an experimental clientserver visual tracking system to evaluate our middleware framework. In order to maintain tracking precision in a specified range, the middleware framework controls the tracking application according to the current system dynamics. We show that our framework improves the adaptive abilities of the application, while the adaptive actions are stable, flexible and configurable according to the needs of individual applications. Furthermore, we show a quantitative improvement of resource utilization in comparison to over-allocation approach when QoS guarantees in range are required.
In REF , a middleware control framework was proposed, using the dynamic control of the internal parameters and re-organization of functions to enhance QoS adaptation decisions effectively.
2113347
A Control-Based Middleware Framework for Quality of Service Adaptations
{ "venue": "IEEE Journal on Selected Areas in Communications", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Abstract-"In-hand manipulation" is the ability to reposition an object in the hand, for example when adjusting the grasp of a hammer before hammering a nail. The common approach to in-hand manipulation with robotic hands, known as dexterous manipulation [1] , is to hold an object within the fingertips of the hand and wiggle the fingers, or walk them along the object's surface. Dexterous manipulation, however, is just one of the many techniques available to the robot. The robot can also roll the object in the hand by using gravity, or adjust the object's pose by pressing it against a surface, or if fast enough, it can even toss the object in the air and catch it in a different pose. All these techniques have one thing in common: they rely on resources extrinsic to the hand, either gravity, external contacts or dynamic arm motions. We refer to them as "extrinsic dexterity". In this paper we study extrinsic dexterity in the context of regrasp operations, for example when switching from a power to a precision grasp, and we demonstrate that even simple grippers are capable of ample in-hand manipulation. We develop twelve regrasp actions, all open-loop and handscripted, and evaluate their effectiveness with over 1200 trials of regrasps and sequences of regrasps, for three different objects (see video [2] ). The long-term goal of this work is to develop a general repertoire of these behaviors, and to understand how such a repertoire might eventually constitute a general-purpose in-hand manipulation capability.
At present, there are two main approaches for in-hand manipulation, depending on whether or not the approach relies on resources extrinsic to the hand REF.
1175946
Extrinsic dexterity: In-hand manipulation with external forces
{ "venue": "2014 IEEE International Conference on Robotics and Automation (ICRA)", "journal": "2014 IEEE International Conference on Robotics and Automation (ICRA)", "mag_field_of_study": [ "Computer Science", "Engineering" ] }
Abstract-State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [1] and Fast R-CNN [2] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features-using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model [3], our detection system has a frame rate of 5 fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available. Region proposal methods typically rely on inexpensive features and economical inference schemes. Selective Search [4] , one of the most popular methods, greedily merges superpixels based on engineered low-level features. Yet when compared to efficient detection networks [2], Selective Search is an order of magnitude slower, at 2 seconds per image in a CPU implementation. EdgeBoxes [6] currently provides the best tradeoff between proposal quality and speed, at 0.2 seconds per image. Nevertheless, the region proposal step still consumes as much running time as the detection network. One may note that fast region-based CNNs take advantage of GPUs, while the region proposal methods used in research are implemented on the CPU, making such runtime comparisons inequitable. An obvious way to accelerate proposal computation is to re-implement it for the GPU. This may be an effective engineering solution, but re-implementation ignores the down-stream detection network and therefore misses important opportunities for sharing computation. In this paper, we show that an algorithmic change-computing proposals with a deep convolutional neural network-leads to an elegant and effective solution where proposal computation is nearly cost-free given the detection network's computation. To this end, we introduce novel Region Proposal Networks (RPNs) that share convolutional layers with state-of-the-art object detection networks [1], [2] . By sharing convolutions at test-time, the marginal cost for computing proposals is small (e.g., 10 ms per image). Our observation is that the convolutional feature maps used by region-based detectors, like Fast R-CNN, can also be used for generating region proposals. On top of these convolutional features, we construct an RPN by adding a few additional convolutional layers that simultaneously regress region bounds and objectness scores at each location on a regular grid. The RPN is thus a kind of fully convolutional network (FCN) [7] and can be trained end-to-end specifically for the task for generating detection proposals. RPNs are designed to efficiently predict region proposals with a wide range of scales and aspect ratios. 
In contrast to prevalent methods [1], [2], [8], [9] that use pyramids of images (Fig. 1a) or pyramids of filters (Fig. 1b), we introduce novel "anchor" boxes that serve as references at multiple scales and aspect ratios. Our scheme can be thought of as a pyramid of regression references (Fig. 1c), which avoids enumerating images or filters of multiple scales or aspect ratios. This model performs well when trained and tested using single-scale images and thus benefits running speed. To unify RPNs with Fast R-CNN [2] object detection networks, we propose a training scheme that alternates between fine-tuning for the region proposal task and fine-tuning for object detection, while keeping the proposals fixed.
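To make the anchor mechanism concrete, here is a minimal sketch of reference-box enumeration, assuming the paper's default 3 scales x 3 aspect ratios (9 anchors per feature-map location); the function name, stride, and feature-map size are illustrative, not taken from the released code.

```python
import numpy as np

def generate_anchors(feat_h, feat_w, stride=16,
                     scales=(128, 256, 512), ratios=(0.5, 1.0, 2.0)):
    """Enumerate 'anchor' reference boxes at every feature-map position.

    Returns an array of shape (feat_h * feat_w * len(scales) * len(ratios), 4)
    holding (x1, y1, x2, y2) boxes in input-image coordinates; each anchor
    has area ~scale^2 and a width/height ratio of r.
    """
    anchors = []
    for y in range(feat_h):
        for x in range(feat_w):
            cx, cy = x * stride + stride / 2, y * stride + stride / 2
            for s in scales:
                for r in ratios:
                    w, h = s * np.sqrt(r), s / np.sqrt(r)
                    anchors.append([cx - w / 2, cy - h / 2,
                                    cx + w / 2, cy + h / 2])
    return np.array(anchors)

boxes = generate_anchors(38, 50)  # e.g. a ~600x800 input with VGG-16's stride of 16
print(boxes.shape)                # (17100, 4): 38 * 50 * 9 anchors
```

Per the paper, two sibling layers on top of the shared features then predict, for each anchor, objectness scores and box-regression offsets.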
Many pedestrian detectors employ a region proposal network (RPN) REF as the main component of the detector to improve detection accuracy.
10328909
Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks
{ "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "mag_field_of_study": [ "Computer Science", "Medicine" ] }
New smartphone technologies for the first time provide a platform for a new type of on-person public health data collection and also a new type of informational public health intervention. In such interventions, it is the device itself that, by automatically collecting data relevant to the individual's health, triggers the receipt of an informational public health intervention relevant to that individual. This enables far more targeted and personalized public health interventions than previously possible. Furthermore, sensor-based public health data collection, combined with such informational public health interventions, provides the underlying platform for a novel and powerful new form of learning public health system. In this paper we provide an architecture for such a sensor-based learning public health system, in particular one which maintains the anonymity of its individual participants; we describe its algorithm for iterative public health intervention improvement, and we provide an evaluation of its anonymity-maintaining characteristics.
Our previous work has reported a particular form of learning health system, a learning public health system, that works by distributing informational public health interventions to individuals' smartphones, measuring the effect of those interventions, and incrementally improving them, thereby improving public health REF .
34860190
A Sensor-based Learning Public Health System
{ "venue": "HICSS", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Abstract-In long term evolution-advanced (LTE-A) networks, the carrier aggregation technique is incorporated for user equipments (UEs) to simultaneously aggregate multiple component carriers (CCs) for achieving higher transmission rate. Many research works for LTE-A systems with carrier aggregation configuration have concentrated on the radio resource management problem for downlink transmission, including mainly CC assignment and packet scheduling. Most previous studies have not considered that the assigned CCs in each UE can be changed. Furthermore, they also have not considered the modulation and coding scheme constraint, as specified in LTE-A standards. Therefore, their proposed schemes may limit the radio resource usage and are not compatible with LTE-A systems. In this paper, we assume that the scheduler can reassign CCs to each UE at each transmission time interval and formulate the downlink radio resource scheduling problem under the modulation and coding scheme constraint, which is proved to be NP-hard. Then, a novel greedy-based scheme is proposed to maximize the system throughput while maintaining proportional fairness of radio resource allocation among all UEs. We show that this scheme can guarantee at least half of the performance of the optimal solution. Simulation results show that our proposed scheme outperforms the schemes in previous studies.
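As a toy illustration of the MCS constraint (not the scheme proposed in the paper, which also maintains proportional fairness and can reassign CCs at each TTI), the sketch below greedily assigns resource blocks on a single carrier when a UE's throughput is the number of assigned RBs times the rate of its worst RB, i.e., one MCS per UE; the names and rate values are hypothetical.

```python
def greedy_schedule(rb_cqi, n_ues):
    """Greedy RB assignment under the MCS constraint.

    rb_cqi[u][r] is the achievable per-RB rate of UE u on RB r. Because all
    RBs a UE receives must share one MCS, its throughput is
    len(assigned RBs) * min(rate of assigned RBs).
    """
    assigned = {u: [] for u in range(n_ues)}
    free = set(range(len(rb_cqi[0])))

    def throughput(u, rbs):
        return len(rbs) * min(rb_cqi[u][r] for r in rbs) if rbs else 0.0

    def gain(u, r):
        return throughput(u, assigned[u] + [r]) - throughput(u, assigned[u])

    while free:
        u, r = max(((u, r) for u in range(n_ues) for r in free),
                   key=lambda ur: gain(*ur))
        if gain(u, r) <= 0:          # adding any RB would lower throughput
            break
        assigned[u].append(r)
        free.remove(r)
    return assigned

rates = [[2.0, 1.0, 3.0], [1.5, 2.5, 0.5]]   # 2 UEs, 3 RBs (illustrative rates)
print(greedy_schedule(rates, n_ues=2))       # e.g. {0: [2, 0], 1: [1]}
```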
Similarly, the authors in REF have addressed a downlink resource scheduling problem that also takes into account the MCS constraint in LTE-A, using a greedy-based algorithm to maximize the system throughput.
5515342
An Efficient Downlink Radio Resource Allocation with Carrier Aggregation in LTE-Advanced Networks
{ "venue": "IEEE Transactions on Mobile Computing", "journal": "IEEE Transactions on Mobile Computing", "mag_field_of_study": [ "Computer Science" ] }
When an initial failure of nodes occurs in interdependent networks, a cascade of failure between the networks occurs. Earlier studies focused on random initial failures. Here we study the robustness of interdependent networks under targeted attack on high or low degree nodes. We introduce a general technique and show that the targeted-attack problem in interdependent networks can be mapped to the random-attack problem in a transformed pair of interdependent networks. We find that when the highly connected nodes are protected and have lower probability to fail, in contrast to single scale free (SF) networks where the percolation threshold p c = 0, coupled SF networks are significantly more vulnerable with p c significantly larger than zero. The result implies that interdependent networks are difficult to defend by strategies such as protecting the high degree nodes that have been found useful to significantly improve robustness of single networks.
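A toy simulation of the failure cascade, contrasting targeted (highest-degree-first) with random initial attack; the one-to-one dependency model, graph generator, and parameters are illustrative assumptions, not the paper's analytical mapping.

```python
import random
import networkx as nx

def cascade(A, B, attack_frac=0.1, targeted=True):
    """Toy cascade on two interdependent networks (node i in A depends on i in B).

    Removes a fraction of A's nodes (highest-degree first if targeted), then
    alternately prunes nodes outside each network's giant component together
    with their dependency partners, until stable. Returns surviving node count.
    """
    A, B = A.copy(), B.copy()
    n_remove = int(attack_frac * A.number_of_nodes())
    if targeted:
        by_degree = sorted(A.degree, key=lambda kv: kv[1], reverse=True)
        victims = [v for v, _ in by_degree[:n_remove]]
    else:
        victims = random.sample(list(A.nodes), n_remove)
    A.remove_nodes_from(victims)
    B.remove_nodes_from(victims)              # dependency partners fail too
    while True:
        changed = False
        for G, H in ((A, B), (B, A)):
            if G.number_of_nodes() == 0:
                return 0
            giant = max(nx.connected_components(G), key=len)
            dead = set(G.nodes) - giant
            if dead:
                G.remove_nodes_from(dead)
                H.remove_nodes_from(dead)     # nodes absent from H are ignored
                changed = True
        if not changed:
            return A.number_of_nodes()

n = 2000
A = nx.barabasi_albert_graph(n, 2, seed=1)    # scale-free-like topologies
B = nx.barabasi_albert_graph(n, 2, seed=2)
print("targeted:", cascade(A, B, 0.2, targeted=True))
print("random:  ", cascade(A, B, 0.2, targeted=False))
```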
Targeted attacks on interdependent networks have been studied in REF .
12763939
Robustness of interdependent networks under targeted attack
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Physics", "Computer Science", "Medicine" ] }
Understanding what motivates participation is a central theme in the research on open source software (OSS) development. Our study contributes by revealing how the different motivations of OSS developers are interrelated, how these motivations influence participation leading to performance, and how past performance influences subsequent motivations. Drawing on theories of intrinsic and extrinsic motivation, we develop a theoretical model relating the motivations, participation, and performance of OSS developers. We evaluate our model using survey and archival data collected from a longitudinal field study of software developers in the Apache projects. Our results reveal several important findings. First, we find that developers' motivations are not independent but rather are related in complex ways. Being paid to contribute to Apache projects is positively related to developers' status motivations but negatively related to their use-value motivations. Perhaps surprisingly, we find no evidence of diminished intrinsic motivation in the presence of extrinsic motivations; rather, status motivations enhance intrinsic motivations. Second, we find that different motivations have an impact on participation in different ways. Developers' paid participation and status motivations lead to above-average contribution levels, but use-value motivations lead to below-average contribution levels, and intrinsic motivations do not significantly impact average contribution levels. Third, we find that developers' contribution levels positively impact their performance rankings. Finally, our results suggest that past-performance rankings enhance developers' subsequent status motivations.
REF conducted a study which revealed how the different motivations of open-source developers were interrelated, how these motivations influenced participation and how past performance influenced subsequent motivations.
9916105
Understanding the motivations, participation, and performance of open-source software developers: A longitudinal study of the apache projects
{ "venue": "Management Science", "journal": null, "mag_field_of_study": [ "Economics", "Computer Science" ] }
In Wireless Sensor Networks (WSNs), routing data towards the sink leads to unbalanced energy consumption among intermediate nodes, resulting in a high data loss rate. The use of multiple Mobile Data Collectors (MDCs) has been proposed in the literature to mitigate such problems. MDCs help to achieve uniform energy consumption across the network, fill coverage gaps, and reduce end-to-end communication delays, amongst others. However, mechanisms to support MDCs such as location advertisement and route maintenance introduce significant overhead in terms of energy consumption and packet delays. In this paper, we propose a self-organizing and adaptive Dynamic Clustering (DCMDC) solution to maintain MDC-relay networks. This solution is based on dividing the network into well-delimited clusters called Service Zones (SZs). Localizing mobility management traffic to an SZ reduces signaling overhead, route setup delay and bandwidth utilization. Network clustering also helps to achieve scalability and load balancing. Smaller network clusters make buffer overflows and energy depletion less of a problem. These performance gains are expected to support achieving higher information completeness and availability as well as maximizing the network lifetime. Moreover, maintaining continuous connectivity between the MDC and sensor nodes increases information availability and validity. Performance experiments show that DCMDC outperforms its rival in the literature. Besides the improved quality of information, the proposed approach improves the packet delivery ratio by up to 10%, end-to-end delay by up to 15%, energy consumption by up to 53%, energy balancing by up to 51%, and prolongs the network lifetime by up to 53%.
In REF , a self-organizing and adaptive clustering solution was developed to improve sensor network performance in terms of delivery ratio, delay, and lifetime.
40616534
Dynamic clustering and management of mobile wireless sensor networks
{ "venue": "Comput. Networks", "journal": "Comput. Networks", "mag_field_of_study": [ "Computer Science" ] }
Abstract-Mobile ad hoc networking has been an active research area for several years. How to stimulate cooperation among selfish mobile nodes, however, is not well addressed yet. In this paper, we propose Sprite, a simple, cheat-proof, credit-based system for stimulating cooperation among selfish nodes in mobile ad hoc networks. Our system provides incentive for mobile nodes to cooperate and report actions honestly. Compared with previous approaches, our system does not require any tamper-proof hardware at any node. Furthermore, we present a formal model of our system and prove its properties. Evaluations of a prototype implementation show that the overhead of our system is small. Simulations and analysis show that mobile nodes can cooperate and forward each other's messages, unless the resource of each node is extremely low.
In REF , a cheat-proof, credit-based system was developed for stimulating cooperation among selfish nodes in mobile ad hoc networks.
3103973
Sprite: a simple, cheat-proof, credit-based system for mobile ad-hoc networks
{ "venue": "IEEE INFOCOM 2003. Twenty-second Annual Joint Conference of the IEEE Computer and Communications Societies (IEEE Cat. No.03CH37428)", "journal": "IEEE INFOCOM 2003. Twenty-second Annual Joint Conference of the IEEE Computer and Communications Societies (IEEE Cat. No.03CH37428)", "mag_field_of_study": [ "Computer Science" ] }
Abstract: Physical activity recognition using embedded sensors has enabled many context-aware applications in different areas, such as healthcare. Initially, one or more dedicated wearable sensors were used for such applications. However, recently, many researchers started using mobile phones for this purpose, since these ubiquitous devices are equipped with various sensors, ranging from accelerometers to magnetic field sensors. In most of the current studies, sensor data collected for activity recognition are analyzed offline using machine learning tools. However, there is now a trend towards implementing activity recognition systems on these devices in an online manner, since modern mobile phones have become more powerful in terms of available resources, such as CPU, memory and battery. The research on offline activity recognition has been reviewed in several earlier studies in detail. However, work done on online activity recognition is still in its infancy and is yet to be reviewed. In this paper, we review the studies done so far that implement activity recognition systems on mobile phones and use only their on-board sensors. We discuss various aspects of these studies. Moreover, we discuss their limitations and present various recommendations for future research.
Shoaib et al. REF present a review of studies that implement activity recognition systems on smartphones using only their on-board sensors.
5724477
A Survey of Online Activity Recognition Using Mobile Phones
{ "venue": "Sensors (Basel, Switzerland)", "journal": "Sensors (Basel, Switzerland)", "mag_field_of_study": [ "Engineering", "Medicine", "Computer Science" ] }
Intelligence is one of the most important aspects in the development of our future communities. Ranging from smart home to smart building to smart city, all these smart infrastructures must be supported by intelligent power supply. Smart grid is proposed to solve all challenges of future electricity supply. In smart grid, in order to realize optimal scheduling, an SM is installed at each home to collect the near-real-time electricity consumption data, which can be used by the utilities to offer better smart home services. However, the near-real-time data may disclose a user's private information. An adversary may track the application usage patterns by analyzing the user's electricity consumption profile. In this article, we propose a privacy-preserving and efficient data aggregation scheme. We divide users into different groups, and each group has a private blockchain to record its members' data. To preserve the inner privacy within a group, we use pseudonyms to hide users' identities, and each user may create multiple pseudonyms and associate his/ her data with different pseudonyms. In addition, the bloom filter is adopted for fast authentication. The analysis shows that the proposed scheme can meet the security requirements and achieve better performance than other popular methods.
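The fast-authentication component rests on a Bloom filter; below is a minimal sketch of membership checks over registered pseudonyms (the hash construction and sizes are arbitrary illustrative choices, not the paper's).

```python
import hashlib

class BloomFilter:
    """Tiny Bloom filter for fast, probabilistic membership checks."""

    def __init__(self, m=4096, k=3):
        self.m, self.k = m, k               # m bits, k hash functions
        self.bits = bytearray(m // 8)

    def _positions(self, item):
        # Derive k bit positions from salted SHA-256 digests.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))

bf = BloomFilter()
bf.add("pseudonym-42a7")                    # hypothetical registered pseudonym
print("pseudonym-42a7" in bf)               # True
print("pseudonym-ffff" in bf)               # False (up to a small FP rate)
```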
To prevent a pseudonym from being linked to a user by matching energy consumption against the user's behaviours, in REF each user generates multiple pseudonyms and submits his power consumption data under different pseudonyms.
46931333
Privacy-Preserving and Efficient Aggregation Based on Blockchain for Power Grid Communications in Smart Communities
{ "venue": "IEEE Communications Magazine", "journal": "IEEE Communications Magazine", "mag_field_of_study": [ "Computer Science" ] }
Abstract-This paper studies the problem of stochastic dynamic pricing and energy management policy for electric vehicle (EV) charging service providers. In the presence of renewable energy integration and energy storage system, EV charging service providers must deal with multiple uncertainties -charging demand volatility, inherent intermittency of renewable energy generation, and wholesale electricity price fluctuation. The motivation behind our work is to offer guidelines for charging service providers to determine proper charging prices and manage electricity to balance the competing objectives of improving profitability, enhancing customer satisfaction, and reducing impact on power grid in spite of these uncertainties. We propose a new metric to assess the impact on power grid without solving complete power flow equations. To protect service providers from severe financial losses, a safeguard of profit is incorporated in the model. Two algorithms -stochastic dynamic programming (SDP) algorithm and greedy algorithm (benchmark algorithm) -are applied to derive the pricing and electricity procurement policy. A Pareto front of the multiobjective optimization is derived. Simulation results show that using SDP algorithm can achieve up to 7% profit gain over using greedy algorithm. Additionally, we observe that the charging service provider is able to reshape spatial-temporal charging demands to reduce the impact on power grid via pricing signals.
Luo et al. REF studied the problem of stochastic dynamic pricing and energy management policy for PEV charging service providers in the presence of energy storage systems and multiple uncertainty sources.
55173196
Stochastic Dynamic Pricing for EV Charging Stations with Renewable Energy Integration and Energy Storage
null
Abstract-In defending against various network attacks, such as distributed denial-of-service (DDoS) attacks or worm attacks, a defense system needs to deal with various network conditions and dynamically changing attacks. Therefore, a good defense system needs to have a built-in "adaptive defense" functionality based on cost minimization-adaptively adjusting its configurations according to the network condition and attack severity in order to minimize the combined cost introduced by false positives (misidentify normal traffic as attack) and false negatives (misidentify attack traffic as normal) at any time. In this way, the adaptive defense system can generate fewer false alarms in normal situations or under light attacks with relaxed defense configurations, while protecting a network or a server more vigorously under severe attacks. In this paper, we present concrete adaptive defense system designs for defending against two major network attacks: SYN flood DDoS attack and Internet worm infection. The adaptive defense is a high-level system design that can be built on various underlying nonadaptive detection and filtering algorithms, which makes it applicable for a wide range of security defenses. Index Terms-Adaptive defense, computer security, distributed denial-of-service (DDoS), Internet worm, SYN flood.
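A minimal sketch of the underlying cost-minimization idea: choose the detection threshold that minimizes the combined cost of false positives and false negatives on current traffic samples; re-running the search as conditions change yields the adaptive behavior. The score-based detector abstraction and the cost weights are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def best_threshold(scores_normal, scores_attack, c_fp=1.0, c_fn=5.0):
    """Pick a detection threshold minimizing c_fp * P(FP) + c_fn * P(FN).

    scores_normal / scores_attack: detector scores observed on normal and
    attack traffic samples under the current network condition.
    """
    candidates = np.unique(np.concatenate([scores_normal, scores_attack]))
    best_t, best_cost = None, np.inf
    for t in candidates:
        fp = np.mean(scores_normal >= t)   # normal traffic flagged as attack
        fn = np.mean(scores_attack < t)    # attack traffic passed as normal
        cost = c_fp * fp + c_fn * fn
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t, best_cost

normal = np.random.normal(0.2, 0.1, 1000)  # illustrative score distributions
attack = np.random.normal(0.7, 0.15, 200)
print(best_threshold(normal, attack))
```

Raising c_fn (severe attack) pushes the chosen threshold down, i.e., a more aggressive defense, matching the adaptive behavior described above.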
Also for a different purpose, Zou et al. REF explored the concept of "adaptive defense" based on cost optimization, where cost was introduced by false positives and false negatives.
2306296
Adaptive Defense Against Various Network Attacks
{ "venue": "IEEE Journal on Selected Areas in Communications", "journal": "IEEE Journal on Selected Areas in Communications", "mag_field_of_study": [ "Computer Science" ] }
While depth tends to improve network performances, it also makes gradient-based training more difficult since deeper networks tend to be more non-linear. The recently proposed knowledge distillation approach is aimed at obtaining small and fast-to-execute models, and it has shown that a student network could imitate the soft output of a larger teacher network or ensemble of networks. In this paper, we extend this idea to allow the training of a student that is deeper and thinner than the teacher, using not only the outputs but also the intermediate representations learned by the teacher as hints to improve the training process and final performance of the student. Because the student intermediate hidden layer will generally be smaller than the teacher's intermediate hidden layer, additional parameters are introduced to map the student hidden layer to the prediction of the teacher hidden layer. This allows one to train deeper students that can generalize better or run faster, a trade-off that is controlled by the chosen student capacity. For example, on CIFAR-10, a deep student network with almost 10.4 times less parameters outperforms a larger, state-of-the-art teacher network.
This technique, known as knowledge distillation, was extended by Romero et al. Their FitNets REF use the knowledge distillation technique to train a thin but deep student network using not only the outputs but also the intermediate representations of the teacher network.
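A minimal PyTorch sketch of the hint-based stage: a learned regressor maps the (smaller) student hidden layer into the teacher's hint layer so an L2 loss can be applied. The channel counts and the 1x1-conv regressor are hypothetical; the regressor's parameters are trained jointly with the student.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical sizes: the regressor maps the thinner student representation
# (64 channels) into the teacher's hint space (256 channels).
regressor = nn.Conv2d(64, 256, kernel_size=1)

def hint_loss(student_feat, teacher_feat):
    """L2 distance between the teacher's hint and the regressed student features."""
    return F.mse_loss(regressor(student_feat), teacher_feat)

s = torch.randn(8, 64, 28, 28)    # student intermediate activations (batch of 8)
t = torch.randn(8, 256, 28, 28)   # teacher intermediate activations
loss = hint_loss(s, t)            # stage 1; stage 2 then distills the soft outputs
```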
2723173
FitNets: Hints for Thin Deep Nets
{ "venue": "ICLR 2015", "journal": "arXiv: Learning", "mag_field_of_study": [ "Computer Science", "Mathematics" ] }
We present an autoencoder that leverages learned representations to better measure similarities in data space. By combining a variational autoencoder with a generative adversarial network we can use learned feature representations in the GAN discriminator as basis for the VAE reconstruction objective. Thereby, we replace element-wise errors with feature-wise errors to better capture the data distribution while offering invariance towards e.g. translation. We apply our method to images of faces and show that it outperforms VAEs with element-wise similarity measures in terms of visual fidelity. Moreover, we show that the method learns an embedding in which high-level abstract visual features (e.g. wearing glasses) can be modified using simple arithmetic.
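The key change is to the reconstruction term: errors are measured in a discriminator feature space rather than pixel space. A sketch, where `discriminator_features` is assumed to expose an intermediate layer of the GAN discriminator:

```python
import torch.nn.functional as F

def feature_reconstruction_loss(discriminator_features, x, x_recon):
    """VAE reconstruction error measured feature-wise: compare discriminator
    features of the real image and its reconstruction, instead of comparing
    pixels element-wise."""
    f_real = discriminator_features(x)        # features of the real image
    f_fake = discriminator_features(x_recon)  # features of the VAE reconstruction
    return F.mse_loss(f_fake, f_real)
```

In the full model this term is combined with the usual VAE prior term and the GAN objective, so the encoder, decoder/generator, and discriminator are trained together.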
Larsen et al. REF presented an autoencoder that leverages learned representations to better measure similarities in data space.
8785311
Autoencoding beyond pixels using a learned similarity metric
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science", "Mathematics" ] }
Deep neural networks require large amounts of resources which makes them hard to use on resource constrained devices such as Internet-of-things devices. Offloading the computations to the cloud can circumvent these constraints but introduces a privacy risk since the operator of the cloud is not necessarily trustworthy. We propose a technique that obfuscates the data before sending it to the remote computation node. The obfuscated data is unintelligible for a human eavesdropper but can still be classified with a high accuracy by a neural network trained on unobfuscated images.
Leroux et al. REF use an autoencoder to obfuscate the data before sending it to the cloud, but the obfuscation they use is readily reversible, as they state.
44097546
Privacy Aware Offloading of Deep Neural Networks
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science", "Mathematics" ] }
[Figure: Resizing a clock model (267 connected components). Standard non-uniform scaling distorts the shape of parts of the model, e.g. the dial (b); our approach resizes the clock in a more natural manner, protecting its shape (c); (d) and (e) show part of the protective grid before and after resizing.] Resizing of 3D models can be very useful when creating new models or placing models inside different scenes. However, uniform scaling is limited in its applicability while straightforward non-uniform scaling can destroy features and lead to serious visual artifacts. Our goal is to define a method that protects model features and structures during resizing. We observe that typically, during scaling some parts of the models are more vulnerable than others, undergoing undesirable deformation. We automatically detect vulnerable regions and carry this information to a protective grid defined around the object, defining a vulnerability map. The 3D model is then resized by a space-deformation technique which scales the grid non-homogeneously while respecting this map. Using space-deformation allows processing of common models of man-made objects that consist of multiple components and contain non-manifold structures. We show that our technique resizes models while suppressing undesirable distortion, creating models that preserve the structure and features of the original ones.
REF present a method for shape-aware resizing of man-made objects that can be non-uniformly scaled along three main axes.
14817192
Non-homogeneous resizing of complex models
{ "venue": "SIGGRAPH Asia '08", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Descriptive names are a vital part of readable, and hence maintainable, code. Recent progress on automatically suggesting names for local variables tantalizes with the prospect of replicating that success with method and class names. However, suggesting names for methods and classes is much more difficult. This is because good method and class names need to be functionally descriptive, but suggesting such names requires that the model goes beyond local context. We introduce a neural probabilistic language model for source code that is specifically designed for the method naming problem. Our model learns which names are semantically similar by assigning them to locations, called embeddings, in a high-dimensional continuous space, in such a way that names with similar embeddings tend to be used in similar contexts. These embeddings seem to contain semantic information about tokens, even though they are learned only from statistical co-occurrences of tokens. Furthermore, we introduce a variant of our model that is, to our knowledge, the first that can propose neologisms, names that have not appeared in the training corpus. We obtain state of the art results on the method, class, and even the simpler variable naming tasks. More broadly, the continuous embeddings that are learned by our model have the potential for wide application within software engineering.
Allamanis et al. use a log-bilinear neural network to capture the context of a method or class and recommend a representative name that may not have appeared in the training corpus REF .
9279336
Suggesting accurate method and class names
{ "venue": "ESEC/FSE 2015", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Abstract-In location-based services, users with location-aware mobile devices are able to make queries about their surroundings anywhere and at any time. While this ubiquitous computing paradigm brings great convenience for information access, it also raises concerns over potential intrusion into user location privacy. To protect location privacy, one typical approach is to cloak user locations into spatial regions based on user-specified privacy requirements, and to transform location-based queries into region-based queries. In this paper, we identify and address three new issues concerning this location cloaking approach. First, we study the representation of cloaking regions and show that a circular region generally leads to a small result size for region-based queries. Second, we develop a mobility-aware location cloaking technique to resist trace analysis attacks. Two cloaking algorithms, namely MaxAccu_Cloak and MinComm_Cloak, are designed based on different performance objectives. Finally, we develop an efficient polynomial algorithm for evaluating circular-region-based kNN queries. Two query processing modes, namely bulk and progressive, are presented to return query results either all at once or in an incremental manner. Experimental results show that our proposed mobility-aware cloaking algorithms significantly improve the quality of location cloaking in terms of an entropy measure without compromising much on query latency or communication cost. Moreover, the progressive query processing mode achieves a shorter response time than the bulk mode by parallelizing the query evaluation and result transmission.
Xu et al. REF developed an efficient algorithm for evaluating circular-region-based kNN queries that applies a filter method based on a distance measure to prune out POIs effectively.
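One distance-based filter consistent with the pruning idea in this summary (a sketch, not the paper's exact algorithm): for a circular region of radius r, a POI's distance to any query point inside the circle lies in [max(0, d - r), d + r] with d the distance from the POI to the center, so any POI whose lower bound exceeds the k-th smallest upper bound can never be among the k nearest neighbors.

```python
import numpy as np

def circular_knn_candidates(center, radius, pois, k):
    """Keep only POIs that can be among the k nearest neighbors of *some*
    query point inside a circular cloaking region (assumes k <= len(pois))."""
    d = np.linalg.norm(pois - center, axis=1)
    lower = np.maximum(0.0, d - radius)     # best case: POI nearest the circle
    upper = d + radius                      # worst case: POI farthest from it
    kth_upper = np.partition(upper, k - 1)[k - 1]
    return pois[lower <= kth_upper]

pois = np.random.rand(1000, 2)              # illustrative POI coordinates
cands = circular_knn_candidates(np.array([0.5, 0.5]), 0.05, pois, k=5)
print(len(cands), "candidates out of", len(pois))
```

This is sound because every true distance is bounded by its interval: the k POIs with the smallest upper bounds already guarantee k neighbors at distance at most kth_upper.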
13929362
Privacy-Conscious Location-Based Queries in Mobile Environments
{ "venue": "IEEE Transactions on Parallel and Distributed Systems", "journal": "IEEE Transactions on Parallel and Distributed Systems", "mag_field_of_study": [ "Computer Science" ] }
One of the most important obstacles to deploying predictive models is the fact that humans do not understand and trust them. Knowing which variables are important in a model's prediction and how they are combined can be very powerful in helping people understand and trust automatic decision making systems. Here we propose interpretable decision sets, a framework for building predictive models that are highly accurate, yet also highly interpretable. Decision sets are sets of independent if-then rules. Because each rule can be applied independently, decision sets are simple, concise, and easily interpretable. We formalize decision set learning through an objective function that simultaneously optimizes accuracy and interpretability of the rules. In particular, our approach learns short, accurate, and non-overlapping rules that cover the whole feature space and pay attention to small but important classes. Moreover, we prove that our objective is a nonmonotone submodular function, which we efficiently optimize to find a near-optimal set of rules. Experiments show that interpretable decision sets are as accurate at classification as state-of-the-art machine learning techniques. They are also three times smaller on average than rule-based models learned by other methods. Finally, results of a user study show that people are able to answer multiple-choice questions about the decision boundaries of interpretable decision sets and write descriptions of classes based on them faster and more accurately than with other rule-based models that were designed for interpretability. Overall, our framework provides a new approach to interpretable machine learning that balances accuracy, interpretability, and computational efficiency.
Lakkaraju et al. formalize decision set learning, which can generate short, succinct, and non-overlapping rules for classification tasks REF .
12533380
Interpretable Decision Sets: A Joint Framework for Description and Prediction
{ "venue": "KDD '16", "journal": null, "mag_field_of_study": [ "Computer Science", "Medicine" ] }
Sensors may fail due to various reasons such as heat, malicious activity, environmental hazards, extended use, and lack of power. As more and more sensors fail, certain desired properties such as barrier coverage will diminish and eventually fall below a desired level. In such a case, the network will have to be repaired. It is therefore desirable to have mechanisms to monitor network properties. In this paper, we are interested in measuring the quality of barrier coverage. In the literature, researchers only consider whether or not a sensor network provides barrier coverage. This is equivalent to measuring its quality as either 0 or 1. We believe quality of barrier coverage is not binary and propose a metric for measuring it. If the measured quality is short of a desired value, we further identify all local regions that need to be repaired. The identified regions are minimum in the sense that if one of them is not repaired then the resulting network will still be short of quality. We also discuss how to actually repair a region.
In REF , Chen, Lai, and Xuan studied how to measure and ensure the quality of barrier coverage in wireless sensor networks.
738774
Measuring and guaranteeing quality of barrier-coverage in wireless sensor networks
{ "venue": "MobiHoc '08", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Generative adversarial nets (GANs) are good at generating realistic images and have been extended for semi-supervised classification. However, under a two-player formulation, existing work shares competing roles of identifying fake samples and predicting labels via a single discriminator network, which can lead to undesirable incompatibility. We present triple generative adversarial net (Triple-GAN), a flexible game-theoretical framework for classification and class-conditional generation in semisupervised learning. Triple-GAN consists of three players-a generator, a discriminator and a classifier, where the generator and classifier characterize the conditional distributions between images and labels, and the discriminator solely focuses on identifying fake image-label pairs. With designed utilities, the distributions characterized by the classifier and generator both concentrate to the data distribution under nonparametric assumptions. We further propose unbiased regularization terms to make the classifier and generator strongly coupled and some biased techniques to boost the performance of Triple-GAN in practice. Our results on several datasets demonstrate the promise in semi-supervised learning, where Triple-GAN achieves comparable or superior performance than state-of-the-art classification results among DGMs; it is also able to disentangle the classes and styles and transfer smoothly on the data level via interpolation on the latent space class-conditionally.
In addition to the generator and discriminator, Li et al. REF use a classifier to achieve a controllable generator.
17579179
Triple Generative Adversarial Nets
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science", "Mathematics" ] }
Nearly all science and engineering fields use search algorithms, which automatically explore a search space to find high-performing solutions: chemists search through the space of molecules to discover new drugs; engineers search for stronger, cheaper, safer designs; scientists search for models that best explain data, etc. The goal of search algorithms has traditionally been to return the single highest-performing solution in a search space. Here we describe a new, fundamentally different type of algorithm that is more useful because it provides a holistic view of how high-performing solutions are distributed throughout a search space. It creates a map of high-performing solutions at each point in a space defined by dimensions of variation that a user gets to choose. This Multi-dimensional Archive of Phenotypic Elites (MAP-Elites) algorithm illuminates search spaces, allowing researchers to understand how interesting attributes of solutions combine to affect performance, either positively or, equally of interest, negatively. For example, a drug company may wish to understand how performance changes as the size of molecules and their cost-to-produce vary. MAP-Elites produces a large diversity of high-performing, yet qualitatively different solutions, which can be more helpful than a single, high-performing solution. Interestingly, because MAP-Elites explores more of the search space, it also tends to find a better overall solution than state-of-the-art search algorithms. We demonstrate the benefits of this new algorithm in three different problem domains ranging from producing modular neural networks to designing simulated and real soft robots. Because MAP-Elites (1) illuminates the relationship between performance and dimensions of interest in solutions, (2) returns a set of high-performing, yet diverse solutions, and (3) improves the state-of-the-art for finding a single, best solution, it will catalyze advances throughout all science and engineering fields. Author's Note: This paper is a preliminary draft of a paper that introduces the MAP-Elites algorithm and explores its capabilities. Normally we would not post such an early draft with only preliminary experimental data, but many people in the community have heard of MAP-Elites, are using it in their own papers, and have asked us for a paper that describes it so that they can cite it, to help them implement MAP-Elites, and that describes the experiments we have already conducted with it. We thus want to share both the details of this algorithm and what we have learned about it from our preliminary experiments. All of the experiments in this paper will be redone before the final version of the paper is published, and the data are thus subject to change. Every field of science and engineering makes use of search algorithms, also known as optimization algorithms, which seek to automatically find a high-quality solution or set of high-quality solutions amongst a large space of possible solutions [1, 2]. Such algorithms often find solutions that outperform those designed by human engineers [3]: they have designed antennas that NASA flew to space [4], found patentable electronic circuit designs [3], automated scientific discovery [5], and created artificial intelligence for robots [6]-[15]. Because of their widespread use, improving search algorithms provides substantial benefits for society. Most search algorithms focus on finding one or a small set of high-quality solutions in a search space. What constitutes high-quality is determined by the user, who specifies one or a few objectives that the solution should score high on. For example, a user may want solutions that are high-performing and low-cost, where each of those desiderata is quantifiably measured either by an equation or simulator. Traditional search algorithms include hill climbing, simulated annealing, evolutionary algorithms, gradient ascent/descent, Bayesian optimization, and multi-objective optimization algorithms [1, 2]. The latter return a set of solutions that represent the best tradeoffs between objectives [16]. A subset of optimization problems are challenging because they require searching for optima in a function or system that is either non-differentiable or cannot be expressed mathematically, typically because a physical system or a complex simulation is required. Such problems require "black box" optimization algorithms, which search for high-performing solutions armed only with the ability to determine the performance of a solution, but without access to the evaluation function that determines that performance. On such problems, one cannot use optimization methods that require calculating the gradient of the function, such as gradient ascent/descent. A notorious challenge in black box optimization is the presence of local optima (also called local minima) [1, 2]. A problem with most search algorithms of this class is that they try to follow a path that will lead to the best global solution by relying on the heuristic that random changes to good solutions lead to better solutions. This approach does not work for highly deceptive problems, however, because in such problems one has to cross low-performing valleys to find the global optima, or even just to find better optima [2]. Because evolutionary algorithms are one of the most success-
[Fig. 1 caption: The MAP-Elites algorithm searches in a high-dimensional space to find the highest-performing solution at each point in a low-dimensional feature space, where the user gets to choose dimensions of variation of interest that define the low-dimensional space. We call this type of algorithm an "illumination algorithm", because it illuminates the fitness potential of each area of the feature space, including tradeoffs between performance and the features of interest. For example, MAP-Elites could search in the space of all possible robot designs (a very high-dimensional space) to find the fastest robot (a performance criterion) for each combination of height and weight.]
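The core loop is compact enough to sketch directly; this follows the algorithm as described (random initialization, then repeated selection of a random elite, mutation, and cell-wise replacement), with the problem-specific operators left as user-supplied callables.

```python
import random

def map_elites(random_solution, mutate, evaluate, feature_descriptor,
               n_init=1000, n_iters=100_000):
    """Minimal MAP-Elites: keep the best ('elite') solution found so far in
    each cell of a user-chosen, discretized feature space.

    evaluate(x) -> fitness (higher is better);
    feature_descriptor(x) -> hashable cell key (the discretized features).
    """
    archive = {}                        # cell key -> (fitness, solution)

    def consider(x):
        f, cell = evaluate(x), feature_descriptor(x)
        if cell not in archive or f > archive[cell][0]:
            archive[cell] = (f, x)      # new cell, or better elite for it

    for _ in range(n_init):             # seed the map with random solutions
        consider(random_solution())
    for _ in range(n_iters):            # mutate randomly chosen elites
        _, parent = random.choice(list(archive.values()))
        consider(mutate(parent))
    return archive
```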
The MAP-Elites algorithm by Mouret and Clune REF combines a performance objective f and a user-defined space of features that describe candidate solutions (which is not required by FFA).
14759751
Illuminating search spaces by mapping elites
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science", "Biology" ] }
Abstract-This paper studies video transmission using a multihoming service in a heterogeneous wireless access medium. We propose an energy and content aware video transmission framework that incorporates the energy limitation of mobile terminals (MTs) and the quality-of-service (QoS) requirements of video streaming applications, and employs the available opportunities in a heterogeneous wireless access medium. In the proposed framework, the MT determines the transmission power for the utilized radio interfaces, selectively drops some packets under the battery energy limitation, and assigns the most valuable packets to different radio interfaces in order to minimize the video quality distortion. First, the problem is formulated as MINLP which is known to be NP-hard. Then we employ a piecewise linearization approach and solve the problem using a cutting plane method which reduces the associated complexity from MINLP to a series of MIPs. Finally, for practical implementation in MTs, we approximate the video transmission framework using a twostage optimization problem. Numerical results demonstrate that the proposed framework exhibits very close performance to the exact problem solution. In addition, the proposed framework, unlike the existing solutions in literature, offers a choice for desirable trade-off between the achieved video quality and the MT operational period per battery charging. Index Terms-Multi-homing video transmission, video packet scheduling, heterogeneous wireless access medium, precedenceconstrained multiple knapsack problem (PC-MKP).
In REF , an energy- and content-aware framework was proposed for video transmission in heterogeneous networks.
15742837
Energy and Content Aware Multi-Homing Video Transmission in Heterogeneous Networks
{ "venue": "IEEE Transactions on Wireless Communications", "journal": "IEEE Transactions on Wireless Communications", "mag_field_of_study": [ "Computer Science" ] }
Abstract Traditionally, a bottleneck preventing the development of more intelligent systems was the limited amount of data available. Nowadays, the total amount of information is almost incalculable and automatic data analyzers are even more needed. However, the limiting factor is the inability of learning algorithms to use all the data to learn within a reasonable time. In order to handle this problem, a new field in machine learning has emerged: large-scale learning. In this context, distributed learning seems to be a promising line of research since allocating the learning process among several workstations is a natural way of scaling up learning algorithms. Moreover, it allows to deal with data sets that are naturally distributed, a frequent situation in many real applications. This study provides some background regarding the advantages of distributed environments as well as an overview of distributed learning for dealing with "very large" data sets.
Two surveys REF [226] provide a general introduction to distributed machine learning algorithms for dealing with big data.
18503837
A survey of methods for distributed machine learning
{ "venue": "Progress in Artificial Intelligence", "journal": "Progress in Artificial Intelligence", "mag_field_of_study": [ "Computer Science" ] }
Extracting useful information from high-dimensional data is an important focus of today's statistical research and practice. Penalized loss function minimization has been shown to be effective for this task both theoretically and empirically. With the virtues of both regularization and sparsity, the L1-penalized squared error minimization method Lasso has been popular in regression models and beyond. In this paper, we combine different norms including L1 to form an intelligent penalty in order to add side information to the fitting of a regression or classification model to obtain reasonable estimates. Specifically, we introduce the Composite Absolute Penalties (CAP) family, which allows given grouping and hierarchical relationships between the predictors to be expressed. CAP penalties are built by defining groups and combining the properties of norm penalties at the across-group and within-group levels. Grouped selection occurs for nonoverlapping groups. Hierarchical variable selection is reached by defining groups with particular overlapping patterns. We propose using the BLASSO and cross-validation to compute CAP estimates in general. For a subfamily of CAP estimates involving only the L1 and L∞ norms, we introduce the iCAP algorithm to trace the entire regularization path for the grouped selection problem. Within this subfamily, unbiased estimates of the degrees of freedom (df) are derived so that the regularization parameter is selected without crossvalidation. CAP is shown to improve on the predictive performance of the LASSO in a series of simulated experiments, including cases with p ≫ n and possibly mis-specified groupings. When the complexity of a model is properly calculated, iCAP is seen to be parsimonious in the experiments.
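A minimal sketch of the penalty itself for the case where the group norms are combined with an outer L1 sum: setting the within-group norm to L-infinity gives the subfamily handled by iCAP, while 2 recovers a group-lasso-style penalty. The function signature and the example grouping are illustrative.

```python
import numpy as np

def cap_penalty(beta, groups, group_norm=np.inf, lam=1.0):
    """Composite Absolute Penalty (outer L1 over within-group norms):
    lam * sum_k || beta_{G_k} ||_{group_norm}.

    groups: list of index lists, one per group (non-overlapping for grouped
    selection; particular overlap patterns encode hierarchies).
    """
    total = 0.0
    for idx in groups:
        total += np.linalg.norm(beta[idx], ord=group_norm)
    return lam * total

beta = np.array([0.5, -1.2, 0.0, 0.3, 2.0])
groups = [[0, 1], [2, 3, 4]]            # a hypothetical non-overlapping grouping
print(cap_penalty(beta, groups))        # L-inf of each group, summed: 1.2 + 2.0
```

Because the whole group pays through a single norm, zeroing out a group's norm zeroes all its coefficients at once, which is what produces grouped selection.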
In REF , the composite absolute penalties (CAP) family was introduced, and an algorithm called iCAP was developed.
9319285
The composite absolute penalties family for grouped and hierarchical variable selection
{ "venue": "Annals of Statistics 2009, Vol. 37, No. 6A, 3468-3497", "journal": null, "mag_field_of_study": [ "Mathematics" ] }