src
stringlengths
100
132k
tgt
stringlengths
10
710
paper_id
stringlengths
3
9
title
stringlengths
9
254
discipline
dict
Skeleton based action recognition distinguishes human actions using the trajectories of skeleton joints, which provide a very good representation for describing actions. Considering that recurrent neural networks (RNNs) with Long Short-Term Memory (LSTM) can learn feature representations and model long-term temporal dependencies automatically, we propose an endto-end fully connected deep LSTM network for skeleton based action recognition. Inspired by the observation that the co-occurrences of the joints intrinsically characterize human actions, we take the skeleton as the input at each time slot and introduce a novel regularization scheme to learn the co-occurrence features of skeleton joints. To train the deep LSTM network effectively, we propose a new dropout algorithm which simultaneously operates on the gates, cells, and output responses of the LSTM neurons. Experimental results on three human action recognition datasets consistently demonstrate the effectiveness of the proposed model.
REF proposes a spatiotemporal LSTM network that learns the co-occurrence features of skeleton joints with a group sparse regularization.
8172563
Co-occurrence Feature Learning for Skeleton based Action Recognition using Regularized Deep LSTM Networks
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science" ] }
Abstract. In 2012, Tseng and Tsai presented a novel revocable ID (identity)-based public key setting that provides an efficient revocation mechanism with a public channel to revoke misbehaving or compromised users from public key systems. Subsequently, based on Tseng and Tsai's revocable ID-based public key setting, Tsai et al. proposed a new revocable ID-based signature (RIBS) scheme in the standard model (without random oracles). However, their RIBS scheme possesses only existential unforgeability under adaptive chosen-message attacks. In the article, we propose the first strongly secure RIBS scheme without random oracles under the computational Diffie-Hellman and collision resistant assumptions. Comparisons with previously proposed schemes are made to demonstrate the advantages of our scheme in terms of revocable functionality and security property.
Based on their scheme, Hung et al. REF proposed another RIBS scheme with improved security.
13335585
Strongly secure revocable ID-based signature without random oracles
{ "venue": "ITC", "journal": "ITC", "mag_field_of_study": [ "Computer Science" ] }
NTRUEncrypt is a fast and practical lattice-based public-key encryption scheme, which has been standardized by IEEE, but until recently, its security analysis relied only on heuristic arguments, which limited the confidence in its security. Recently, this situation has changed, when Stehlé and Steinfeld showed that a slight variant (that we call pNE) could be proven to be secure under chosen-plaintext attack (IND-CPA), assuming the hardness of worst-case problems in ideal lattices. However, for general purpose applications, it is widely accepted that an encryption scheme should satisfy the stronger notion of security under chosen-ciphertext attack (IND-CCA2), and the pNE scheme is insecure in this model. To fill this gap, we present a variant of pNE called NTRUCCA, that is IND-CCA2 secure in the standard model assuming the hardness of worst-case problems in ideal lattices, and only incurs a constant factor overhead in ciphertext and key length over the pNE scheme. To our knowledge, our result gives the first IND-CCA2 secure variant of NTRUEncrypt in the standard model, based on standard cryptographic assumptions. As an intermediate step, we present a construction for an All-But-One (ABO) lossy trapdoor function from pNE, which may be of independent interest. Our scheme uses the lossy trapdoor function framework of Peikert and Waters, which we generalize to the case of (k − 1)-of-k-correlated input distributions.
Recently, Steinfeld et al. REF introduced the first CCA2 secure variant of the NTRU in the standard model with a provable security from worst-case problems in ideal lattices.
12021593
NTRUCCA: How to Strengthen NTRUEncrypt to Chosen-Ciphertext Security in the Standard Model
{ "venue": "Public Key Cryptography", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
We present a low cost method to measure and characterize the end-to-end latency when using a touch system (tap latency) or an input device equipped with a physical button. Our method relies on a vibration sensor attached to a finger and a photo-diode to detect the screen response. Both are connected to a micro-controller connected to a host computer using a low-latency USB communication protocol in order to combine software and hardware probes to help determine where the latency comes from. We present the operating principle of our method before investigating the main sources of latency in several systems. We show that most of the latency originates from the display side. Our method can help application designers characterize and troubleshoot latency on a wide range of interactive systems.
Furthermore, they developed a low cost method to measure and characterize the end-to-end latency of a touch system (tap latency) or an input device equipped with a physical button REF .
11827802
Characterizing Latency in Touch and Button-Equipped Interactive Systems
{ "venue": "UIST '17", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
The recently developed variational autoencoders (VAEs) have proved to be an effective confluence of the rich representational power of neural networks with Bayesian methods. However, most work on VAEs use a rather simple prior over the latent variables such as standard normal distribution, thereby restricting its applications to relatively simple phenomena. In this work, we propose hierarchical nonparametric variational autoencoders, which combines treestructured Bayesian nonparametric priors with VAEs, to enable infinite flexibility of the latent representation space. Both the neural parameters and Bayesian priors are learned jointly using tailored variational inference. The resulting model induces a hierarchical structure of latent semantic concepts underlying the data corpus, and infers accurate representations of data instances. We apply our model in video representation learning. Our method is able to discover highly interpretable activity hierarchies, and obtain improved clustering accuracy and generalization capacity based on the learned rich representations.
An exception is hierarchical nonparametric variational autoencoders proposed in REF .
2042076
Nonparametric Variational Auto-Encoders for Hierarchical Representation Learning
{ "venue": "2017 IEEE International Conference on Computer Vision (ICCV)", "journal": "2017 IEEE International Conference on Computer Vision (ICCV)", "mag_field_of_study": [ "Mathematics", "Computer Science" ] }
In this paper we address issues related to building a large-scale Chinese corpus. We try to answer four questions: (i) how to speed up annotation, (ii) how to maintain high annotation quality, (iii) for what purposes is the corpus applicable, and finally (iv) what future work we anticipate.
REF ) also address some issues related to building a large-scale Chinese corpus.
6785675
Building A Large-Scale Annotated Chinese Corpus
{ "venue": "International Conference On Computational Linguistics", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
The analysis of brain connectivity is a vast field in neuroscience with a frequent use of visual representations and an increasing need for visual analysis tools. Based on an in-depth literature review and interviews with neuroscientists, we explore high-level brain connectivity analysis tasks that need to be supported by dedicated visual analysis tools. A significant example of such a task is the comparison of different connectivity data in the form of weighted graphs. Several approaches have been suggested for graph comparison within information visualization, but the comparison of weighted graphs has not been addressed. We explored the design space of applicable visual representations and present augmented adjacency matrix and node-link visualizations. To assess which representation best support weighted graph comparison tasks, we performed a controlled experiment. Our findings suggest that matrices support these tasks well, outperforming node-link diagrams. These results have significant implications for the design of brain connectivity analysis tools that require weighted graph comparisons. They can also inform the design of visual analysis tools in other domains, e.g. comparison of weighted social networks or biological pathways.
Alper et al. found that in tasks involving the comparison of weighted graphs, matrices outperform node-link diagrams REF .
14296360
Weighted graph comparison techniques for brain connectivity analysis
{ "venue": "CHI '13", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
The intimate relationship between human walking and running lies within the skeleto-muscular structure. This is expressed as a mapping that can transform computer vision derived gait signatures from running to walking and vice versa, for purposes of deployment in gait as a biometric or for animation in computer graphics. The computer vision technique can extract leg motion by temporal template matching with a model defined by forced coupled oscillators as the basis. The (biometric) signature is derived from Fourier analysis of the variation in the motion of the thigh and lower leg. ln fact, the mapping between these gait modes clusters better than the original signatures (of which running is the more potent) and can be used for recognition purposes alone, or to buttress both of the signatures. Moreover, the two signatures can be made invariant to gait mode by using the new mapping. There is much research in analysing human gait via computer vision, e.g. human motion (walking) tracking [I] . However, the potential of both walking and running gait [8, 9] as cues for person identification has only been explored recently. ' And these model-based approaches include analysis of the thigh and lower leg motion. Nevertheless, the relationship between human walking and running (in computer vision) remains imperfectly understood. Due to the fact that these two gaits are derived from the same skeletomuscular system, there must exist some correlation between them. Perhaps what makes it seem unfeasible is that human gait is not only a physiological property, but also a complex behavioural [IO] characteristic. That is, we learn how to walk and run when growing up and individuals with similar physiological traits may have their own particular way of walking and running. An understanding of the relationship between human walking and running is essential not only for further improving the existing automatic person recognition system using gait, but also as a foundation for other studies e.g. biomechanics, robotics and computer graphics animation. To determine a relationship between these two biomechanically distinct gaits, we deployed the first analytical human gait model developed earlier [9] which is invariant to walking and running, to automatically extract the lower limbs' motion using computer vision. This paper concentrates on the generic formulae to describe the relationship of the two different gaits. The mapping is based on the idea of phase modulation. It is then used to create signatures that are invariant to walking and running, and enhance the original gait signature. The analysis here has been applied to the largest database of its kind, comprising 20 subjects who are walking and running. The edge maps (Fig. la) obtained by applying.the Sobel edge operator with a threshold on the horizontal component are used in an evidence gathering process to extract the lower limb's motion. The angles of rotation when walking and running are extracted automatically via a 2-pass evidence gathering technique with an analytical model [9] as the underlying template. This model includes the vertical motion of the hip (Eq. l), the thigh rotation (Eq. 2) and the lower leg rotation (Eq. 3). Note that only the hip's vertical motion is included since the subjects are filmed walking and running on a motorised treadmill, and the resolution of the images used during the feature extraction is relatively low. Hence, the horizontal motion is less significant. The hip's relative vertical motion, S,,, is given as a function of time t by S,(t) = A,. sin(2wYt + 0, ) where A, is the amplitude of the vertical oscillation and $? is the phase shifi. The frequency is twice the fundamental, 9, since one gait cycle consists of two steps. The thigh rotation, Bn and the lower leg rotation, Ox, are derived from forced coupled oscillators joined in series where the upper and lower penduli model Br and BK, respectively, (see Fig. 1 b) as described by er = A cos(ogr + $ r ) + B sin(w+ +e,) + M,
Later, they further explored the relationship between walking and running that was expressed as a mapping based on phase modulation REF .
206845056
On the relationship of human walking and running: automatic person identification by gait
{ "venue": "Object recognition supported by user interaction for service robots", "journal": "Object recognition supported by user interaction for service robots", "mag_field_of_study": [ "Computer Science" ] }
Neural networks are powerful and flexible models that work well for many difficult learning tasks in image, speech and natural language understanding. Despite their success, neural networks are still hard to design. In this paper, we use a recurrent network to generate the model descriptions of neural networks and train this RNN with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation set. On the CIFAR-10 dataset, our method, starting from scratch, can design a novel network architecture that rivals the best human-invented architecture in terms of test set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is 0.09 percent better and 1.05x faster than the previous state-of-the-art model that used a similar architectural scheme. On the Penn Treebank dataset, our model can compose a novel recurrent cell that outperforms the widely-used LSTM cell, and other state-of-the-art baselines. Our cell achieves a test set perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than the previous state-of-the-art model. The cell can also be transferred to the character language modeling task on PTB and achieves a state-of-the-art perplexity of 1.214.
In REF , the authors proposed to generate neural network architecture descriptions using a recurrent neural network (RNN).
12713052
Neural Architecture Search with Reinforcement Learning
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science", "Mathematics" ] }
Abstract. The use of multiple monitors for personal desktop computing is becoming more prevalent as the price of display technology decreases. The use of two monitors for a single desktop has been shown to have performance improvement in several studies. However, few studies have been performed with more than three monitors. As a result, we report an observational analysis of the use of a large tiled display containing nine monitors (in a 3x3 matrix). The total resolution of the large display is 3840x3072, for a total of 11,796,480 pixels. Over the course of six months we observed the behavior and actions of five users who used the display extensively as a desktop. We relate our observations, provide feedback concerning common usage of how people do and do not use the display, provide common scenarios and results of interviews, and give a series of design recommendations and guidelines for future designers of applications for high-resolution, tiled displays.
Ball and North REF observed the use of a large tiled display comprising nine 17" LCDs (a 3 X 3 title) for personal desktop computing.
15446325
An analysis of user behavior on high-resolution tiled displays
{ "venue": "In Interact 2005 Tenth IFIP TC13 International Conference on Human-Computer Interaction", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
This paper describes resolve, a system that uses decision trees to learn how to classify coreferent phrases in the domain of business joint ventures. An experiment is presented in which the performance of resolve is compared to the performance of a manually engineered set of rules for the same task. The results show that decision trees achieve higher performance than the rules in two of three evaluation metrics developed for the coreference task. In addition to achieving better performance than the rules, resolve provides a framework that facilitates the exploration of the types of knowledge that are useful for solving the coreference problem.
Similarly, REF use C4.5 to learn decision trees to classify pairs of phrases as coreferent or not.
1366616
Using Decision Trees for Coreference Resolution
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science" ] }
Kernel approximation via nonlinear random feature maps is widely used in speeding up kernel machines. There are two main challenges for the conventional kernel approximation methods. First, before performing kernel approximation, a good kernel has to be chosen. Picking a good kernel is a very challenging problem in itself. Second, high-dimensional maps are often required in order to achieve good performance. This leads to high computational cost in both generating the nonlinear maps, and in the subsequent learning and prediction process. In this work, we propose to optimize the nonlinear maps directly with respect to the classification objective in a data-dependent fashion. The proposed approach achieves kernel approximation and kernel learning in a joint framework. This leads to much more compact maps without hurting the performance. As a by-product, the same framework can also be used to achieve more compact kernel maps to approximate a known kernel. We also introduce Circulant Nonlinear Maps, which uses a circulant-structured projection matrix to speed up the nonlinear maps for high-dimensional data.
Yu et al. REF studied optimizing the sequences in a data-dependent fashion to achieve more compact maps.
15101058
Compact Nonlinear Maps and Circulant Extensions
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Mathematics", "Computer Science" ] }
Social media treats all users the same: trusted friend or total stranger, with little or nothing in between. In reality, relationships fall everywhere along this spectrum, a topic social science has investigated for decades under the theme of tie strength. Our work bridges this gap between theory and practice. In this paper, we present a predictive model that maps social media data to tie strength. The model builds on a dataset of over 2,000 social media ties and performs quite well, distinguishing between strong and weak ties with over 85% accuracy. We complement these quantitative findings with interviews that unpack the relationships we could not predict. The paper concludes by illustrating how modeling tie strength can improve social media design elements, including privacy controls, message routing, friend introductions and information prioritization.
Gilbert and Karahalios REF define a predictive model that maps social media data to tie strength on a dataset of over 2.000 social media ties.
6096102
Predicting tie strength with social media
{ "venue": "CHI", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
AbstractÐHidden Markov models (HMMs) have become the workhorses of the monitoring and event recognition literature because they bring to time-series analysis the utility of density estimation and the convenience of dynamic time warping. Once trained, the internals of these models are considered opaque; there is no effort to interpret the hidden states. We show that by minimizing the entropy of the joint distribution, an HMM's internal state machine can be made to organize observed activity into meaningful states. This has uses in video monitoring and annotation, low bit-rate coding of scene activity, and detection of anomalous behavior. We demonstrate with models of office activity and outdoor traffic, showing how the framework learns principal modes of activity and patterns of activity change. We then show how this framework can be adapted to infer hidden state from extremely ambiguous images, in particular, inferring 3D body orientation and pose from sequences of low-resolution silhouettes. Index TermsÐVideo activity monitoring, hidden Markov models, hidden state, parameter estimation, entropy minimization. ae A discrete-time hidden Markov model is a mixture model augmented with dynamics by conditioning its hidden state in time t on that of time tÀI. An HMM of x hidden states and Gaussian emission distributions is specified by the 4-tuple f ijj Y i Y " " i Y u u i gY I iY j x, where ijj are multinomial transition probabilities between hidden states, i is the initial probability of state i, and " " i Y u u i parameterize emission distributions for each state, in this case, means and covariances of the Gaussian densities Prob(x xj state i) = x x xY " " i Y u u i . The likelihood of a multivariate time-series is the product of all transition and emission probabilities associated with a hidden state sequence fs I Y s I Y Y s g, summed over all possible state sequences: Dynamic programming algorithms are available for the basic inference tasks: Given a time-series, the Viterbi algorithm computes the most probable hidden state sequence; the forwardbackward algorithm computes the data likelihood and expected sufficient statistics of hidden events such as state transitions and occupancies. These statistics are used in Baum-Welch parameter reestimation to maximize the likelihood of the model given the data. The expectation-maximization (EM) algorithm for HMMs consists of forward-backward analysis and Baum-Welch reestimation iterated to convergence at a local likelihood maximum. The principle of maximum likelihood (ML) is not valid for small data sets; in most vision tasks, the training data is rarely large enough to ªwash outº sampling artifacts (e.g., noise) that obscure the data-generating mechanism's essential regularities. It is not widely appreciated that this is an acute problem in hiddenvariable models, where most of the parameters are only supported by small subsets of the data. That, combined with the fact that the models have high-order symmetries that allow many different parameterizations of the same distribution, results in a learning problem that is riddled with local optima. Consequently, ML hidden-variable models are typically both under-fit, failing to capture the hidden structure of the signal, and over-fit, with a surfeit of weakly supported parameters that inadvertently model accidental properties of the signal such as noise and sample bias. This leads to poor predictive power and modest generalization that supports only limited inference tasks, such as classifying one of a small set of events of interest. We advocate replacing the Baum-Welch formulñ with parameter estimators that minimize entropy. Entropy minimization exploits the duality between learning and compression to approximate an optimal separation between essential properties (regularities and hidden structure in the data that should be captured by the model) and accidental properties (noise and sampling artifacts that should be ignored). In doing so, it reveals hidden structures in the data that tend to be highly correlated with meaningful partitions of the data-generating mechanism's behavior. In this article, we outline entropy minimization for HMMs and show how three video interpretation tasks can be treated as problems of inferring hidden state: annotating office activity, monitoring traffic intersections, and inferring 3D motion from monocular video. A common thread in these applications is the emphasis of inference over image processing or scene modeling; high-level inferences are made from relatively impoverished sensing via learned priors rather than engineered algorithms. Small HMMs and HMM-based hybrids have enjoyed wide success in spoken word and visual gesture recognition, partly because it is feasible to hand-design an adequate transition topology, which is the dominating constraint in the learning problem. However, their usefulness for more complicated systems is seriously curtailed by the fact that for models of nontrivial size, one must probe for an appropriate topology using very expensive search techniques. Although the literature of HMM-based visual event classification is extensive, to our knowledge it does not touch on the focus of this articleÐdiscovering a set of event types that efficiently describes action in the videoÐso we will only review it categorically. One may consult the proceedings [11], [8], [14] to see the bulk of the visual monitoring and event recognition literature in the last two years: Over 30 such papers use a small battery of HMMs as a postvisual processing event classification engine. Nearly all use the HMMs as a standard Bayesian MAP classifier: Each HMM is trained on a few examples of the event of interest; after training, novel events are classified via likelihood ratios. The HMMs have a hand-designed topology, typically corresponding to a band-diagonal transition matrix; the number of bands and states is found by experimentation. Related models, such as dynamic Bayes' nets, also require careful hand-crafting. The problem of finding appropriate HMM topologies is the subject of intense research interest outside of the vision literature; [5] reviews 12 of the most current approaches to learning HMM topology, all involving heuristic generate-and-test search or heuristic clustering methods. In this article, we will explore an unsupervised approach in which entropy minimization automatically induces a partitioning of the signal into events of interest. This framework yields monotonic (hillclimbing) algorithms for simultaneous estimation of model topology and parameters. As our applications will show, the result is a single, sparsely connected HMM containing the entire classification engine. This allows us to 844 IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, VOL. 22, NO. 8, AUGUST 2000 . M. Brand is with Mitsubishi
HMMs were also applied by Brand et al. REF to organize observed activities based on minimizing the entropy of component distributions for both office activities and outdoor traffic flows.
11878199
Discovery and Segmentation of Activities in Video
{ "venue": "IEEE Trans. Pattern Anal. Mach. Intell.", "journal": "IEEE Trans. Pattern Anal. Mach. Intell.", "mag_field_of_study": [ "Computer Science" ] }
We describe some new exactly solvable models of the structure of social networks, based on random graphs with arbitrary degree distributions. We give models both for simple unipartite networks, such as acquaintance networks, and bipartite networks, such as affiliation networks. We compare the predictions of our models to data for a number of real-world social networks and find that in some cases the models are in remarkable agreement with the data, while in others the agreement is poorer, perhaps indicating the presence of additional social structure in the network that is not captured by the random graph.
Newman et al. REF describe some novel uniquely solvable models of the structure of social networks based on random graphs with arbitrary degree distributions.
7415348
Random Graph Models of Social Networks
{ "venue": "Proc. Natl. Acad. Sci. USA", "journal": null, "mag_field_of_study": [ "Medicine", "Computer Science" ] }
ABSTRACT In recent years, advanced threat attacks are increasing, but the traditional network intrusion detection system based on feature filtering has some drawbacks which make it difficult to find new attacks in time. This paper takes NSL-KDD data set as the research object, analyses the latest progress and existing problems in the field of intrusion detection technology, and proposes an adaptive ensemble learning model. By adjusting the proportion of training data and setting up multiple decision trees, we construct a MultiTree algorithm. In order to improve the overall detection effect, we choose several base classifiers, including decision tree, random forest, kNN, DNN, and design an ensemble adaptive voting algorithm. We use NSL-KDD Test+ to verify our approach, the accuracy of the MultiTree algorithm is 84.2%, while the final accuracy of the adaptive voting algorithm reaches 85.2%. Compared with other research papers, it is proved that our ensemble model effectively improves detection accuracy. In addition, through the analysis of data, it is found that the quality of data features is an important factor to determine the detection effect. In the future, we should optimize the feature selection and preprocessing of intrusion detection data to achieve better results. INDEX TERMS Intrusion detection, ensemble learning, deep neural network, voting, MultiTree, NSL-KDD.
In 2019, Gao et al. used NSL-KDD dataset to test and develop an IDS by using an adaptive ensemble learning model REF .
195832172
An Adaptive Ensemble Machine Learning Model for Intrusion Detection
{ "venue": "IEEE Access", "journal": "IEEE Access", "mag_field_of_study": [ "Computer Science" ] }
Abstract-An energy-efficient opportunistic collaborative beamformer with one-bit feedback is proposed for ad hoc sensor networks over Rayleigh fading channels. In contrast to conventional collaborative beamforming schemes in which each source node uses channel state information to correct its local carrier offset and channel phase, the proposed beamforming scheme opportunistically selects a subset of source nodes whose received signals combine in a quasi-coherent manner at the intended receiver. No local phase-precompensation is performed by the nodes in the opportunistic collaborative beamformer. As a result, each node requires only one-bit of feedback from the destination in order to determine if it should or shouldn't participate in the collaborative beamformer. Theoretical analysis shows that the received signal power obtained with the proposed beamforming scheme scales linearly with the number of available source nodes. Since the the optimal node selection rule requires an exhaustive search over all possible subsets of source nodes, two low-complexity selection algorithms are developed. Simulation results confirm the effectiveness of opportunistic collaborative beamforming with the low-complexity selection algorithms.
REF considers opportunistic collaborative beamforming where a designated receiver node sends a single bit of feedback to each relay in order to dictate whether or not it should participate in beamforming process.
328755
Opportunistic Collaborative Beamforming with One-Bit Feedback
{ "venue": "2008 IEEE 9th Workshop on Signal Processing Advances in Wireless Communications", "journal": "2008 IEEE 9th Workshop on Signal Processing Advances in Wireless Communications", "mag_field_of_study": [ "Mathematics", "Computer Science" ] }
We revisit a problem introduced by Bharat and Broder almost a decade ago: how to sample random pages from a search engine's index using only the search engine's public interface? Such a primitive is particularly useful in creating objective benchmarks for search engines. The technique of Bharat and Broder suffers from two well recorded biases: it favors long documents and highly ranked documents. In this paper we introduce two novel sampling techniques: a lexicon-based technique and a random walk technique. Our methods produce biased sample documents, but each sample is accompanied by a corresponding "weight", which represents the probability of this document to be selected in the sample. The samples, in conjunction with the weights, are then used to simulate near-uniform samples. To this end, we resort to three well known Monte Carlo simulation methods: rejection sampling, importance sampling and the Metropolis-Hastings algorithm. We analyze our methods rigorously and prove that under plausible assumptions, our techniques are guaranteed to produce near-uniform samples from the search engine's index. Experiments on a corpus of 2.4 million documents substantiate our analytical findings and show that our algorithms do not have significant bias towards long or highly ranked documents. We use our algorithms to collect fresh data about the relative sizes of Google, MSN Search, and Yahoo!.
The study in REF propose several sampling methods via a search engine API to generate a "near-uniform" sample of documents (under certain plausible assumptions about the search engine).
5959072
Random sampling from a search engine's index
{ "venue": "WWW '06", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Abstract: This paper studies the feasibility and the advantages of a distributed control strategy for a linear end-fire antenna array formation with UAVs. We first analyze the sensitivity of different interaction topologies to a low frequency sinusoidal disturbance affecting just one single vehicle, for antenna array sizes of up to 30 elements. The ETH Zurich Flying Machine Arena (FMA) is used as a test case. Then we consider a more realistic case of wind gust acting on all of the antennas. We show that under such conditions the simplified analysis does well at predicting the formation behavior under different distributed and decentralized control strategies.
A cluster of UAVs has been used as a phased array antenna in REF to show the feasibility of a distributed control strategy.
59335678
Distributed Control of Antenna Array with Formation of UAVs
{ "venue": null, "journal": "IFAC Proceedings Volumes", "mag_field_of_study": [ "Engineering" ] }
Background: One important type of information contained in biomedical research literature is the newly discovered relationships between phenotypes and genotypes. Because of the large quantity of literature, a reliable automatic system to identify this information for future curation is essential. Such a system provides important and up to date data for database construction and updating, and even text summarization. In this paper we present a machine learning method to identify these genotype-phenotype relationships. No large human-annotated corpus of genotype-phenotype relationships currently exists. So, a semi-automatic approach has been used to annotate a small labelled training set and a self-training method is proposed to annotate more sentences and enlarge the training set. The resulting machine-learned model was evaluated using a separate test set annotated by an expert. The results show that using only the small training set in a supervised learning method achieves good results (precision: 76.47, recall: 77.61, F-measure: 77.03) which are improved by applying a self-training method (precision: 77.70, recall: 77.84, F-measure: 77.77). Conclusions: Relationships between genotypes and phenotypes is biomedical information pivotal to the understanding of a patient's situation. Our proposed method is the first attempt to make a specialized system to identify genotype-phenotype relationships in biomedical literature. We achieve good results using a small training set. To improve the results other linguistic contexts need to be explored and an appropriately enlarged training set is required.
Khordad and Mercer introduce a machine learning method for identifying genotypephenotype relations which uses a semi-automatic approach for annotating more sentences to enlarge the training set REF .
7692550
Identifying genotype-phenotype relationships in biomedical text
{ "venue": "Journal of Biomedical Semantics", "journal": "Journal of Biomedical Semantics", "mag_field_of_study": [ "Medicine", "Computer Science" ] }
A Hindley-Milner type system such as ML's seems to prohibit typeindexed values, i.e., functions that map a family of types to a family of values. Such functions generally perform case analysis on the input types and return values of possibly different types. The goal of our work is to demonstrate how to program with type-indexed values within a Hindley-Milner type system. Our first approach is to interpret an input type as its corresponding value, recursively. This solution is type-safe, in the sense that the ML type system statically prevents any mismatch between the input type and function arguments that depend on this type. Such specific type interpretations, however, prevent us from combining different type-indexed values that share the same type. To meet this objection, we focus on finding a value-independent type encoding that can be shared by different functions. We propose and compare two solutions. One requires first-class and higher-order polymorphism, and, thus, is not implementable in the core language of ML, but it can be programmed using higher-order functors in Standard ML of New Jersey. Its usage, however, is clumsy. The other approach uses embedding/projection functions. It appears to be more practical. We demonstrate the usefulness of type-indexed values through examples including type-directed partial evaluation, C printf-like formatting, and subtype coercions. Finally, we discuss the tradeoffs between our approach and some other solutions based on more expressive typing disciplines.
Yang REF presents some approaches to enable type-safe programming of type-indexed values in ML which is similar to term level analysis of types.
1291390
Encoding types in ML-like languages
{ "venue": "Theor. Comput. Sci.", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
In this paper, we study the problem of designing objective functions for machine learning problems defined on finite sets. In contrast to traditional objective functions defined for machine learning problems operating on finite dimensional vectors, the new objective functions we propose are operating on finite sets and are invariant to permutations. Such problems are widespread, ranging from estimation of population statistics , via anomaly detection in piezometer data of embankment dams (Jung et al., 2015) , to cosmology (Ntampaka et al., 2016; Ravanbakhsh et al., 2016a) . Our main theorem characterizes the permutation invariant objective functions and provides a family of functions to which any permutation invariant objective function must belong. This family of functions has a special structure which enables us to design a deep network architecture that can operate on sets and which can be deployed on a variety of scenarios including both unsupervised and supervised learning tasks. We demonstrate the applicability of our method on population statistic estimation, point cloud classification, set expansion, and image tagging.
Zaheer et al. REF has proven that the function that can be represented the following form is permutation-invariant:
4870287
Deep Sets
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Mathematics", "Computer Science" ] }
Background : Test-First programming is regarded as one of the software development practices that can make unit tests to be more rigorous, thorough and effective in fault detection. Code coverage measures can be useful as indicators of the thoroughness of unit test suites, while mutation testing turned out to be effective at finding faults. Objective: This paper presents an experiment in which Test-First vs TestLast programming practices are examined with regard to branch coverage and mutation score indicator of unit tests. Method : Student subjects were randomly assigned to Test-First and TestLast groups. In order to further reduce pre-existing differences among subjects, and to get a more sensitive measure of our experimental effect, multivariate analysis of covariance was performed. Results: Multivariate tests results indicate that there is no statistically significant difference between Test-First and Test-Last practices on the combined dependent variables, i.e. branch coverage and mutation score indicator, (F (2, 9) = .52, p > .05), even if we control for the pre-test results, the subjects' experience, and when the subjects who showed deviations from the assigned programming technique are excluded from the analysis. Conclusion: According to the preliminary results presented in this paper, the benefits of the Test-First practice in this specific context can be considered minor. Limitation: It is probably the first-ever experimental evaluation of the impact of Test-First programming on mutation score indicator of unit tests and further experimentation is needed to establish evidence.
Madeyski REF investigated how test-first programming can impact branch coverage and mutation score indicators.
273744
The impact of test-first programming on branch coverage and mutation score indicator of unit tests: An experiment
{ "venue": null, "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Abstract. We address the issue of efficiently automating assume-guarantee reasoning for simulation conformance between finite state systems and specifications. We focus on a non-circular assume-guarantee proof rule, and show that there is a weakest assumption that can be represented canonically by a deterministic tree automata (DTA). We then present an algorithm L T that learns this DTA automatically in an incremental fashion, in time that is polynomial in the number of states in the equivalent minimal DTA. The algorithm assumes a teacher that can answer membership and candidate queries pertaining to the language of the unknown DTA. We show how the teacher can be implemented using a model checker. We have implemented this framework in the COMFORT toolkit and we report encouraging results (over an order of magnitude improvement in memory consumption) on non-trivial benchmarks.
REF have applied a similar learning paradigm to automate assumption-guarantee reasoning for simulation conformance between finite systems and their specification.
11665819
Automated Assume-Guarantee Reasoning for Simulation Conformance
{ "venue": "In Proc. of CAV’05, volume 3576 of LNCS", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Abstract-Autonomous vehicles need to plan trajectories to a specified goal that avoid obstacles. Previous approaches that used a constrained optimization approach to solve for finite sequences of optimal control inputs have been highly effective. For robust execution, it is essential to take into account the inherent uncertainty in the problem, which arises due to uncertain localization, modeling errors, and disturbances. Prior work has handled the case of deterministically bounded uncertainty. We present here an alternative approach that uses a probabilistic representation of uncertainty, and plans the future probabilistic distribution of the vehicle state so that the probability of collision with obstacles is below a specified threshold. This approach has two main advantages; first, uncertainty is often modeled more naturally using a probabilistic representation (for example in the case of uncertain localization); second, by specifying the probability of successful execution, the desired level of conservatism in the plan can be specified in a meaningful manner. The key idea behind the approach is that the probabilistic obstacle avoidance problem can be expressed as a Disjunctive Linear Program using linear chance constraints. The resulting Disjunctive Linear Program has the same complexity as that corresponding to the deterministic path planning problem with no representation of uncertainty. Hence the resulting problem can be solved using existing, efficient techniques, such that planning with uncertainty requires minimal additional computation. Finally, we present an empirical validation of the new method with a number of aircraft obstacle avoidance scenarios.
A probabilistic approach on path planning with obstacles is tackled by Blackmore et al. in REF .
18210188
A probabilistic approach to optimal robust path planning with obstacles
{ "venue": "2006 American Control Conference", "journal": "2006 American Control Conference", "mag_field_of_study": [ "Computer Science" ] }
We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the Im-ageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection. 1 978-1-4673-6964-0/15/$31.00 ©2015 IEEE
The latest state of the art in image classification is the GoogLeNet, a deep CNN which has 22 layers REF .
206592484
Going deeper with convolutions
{ "venue": "2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)", "journal": "2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)", "mag_field_of_study": [ "Computer Science" ] }
Abstract-Image scrambling is an important technique in digital image encryption and digital image watermarking. This paper's main purpose is to research how to scramble image by space-bit-plane operation (SBPO). Based on analyzing traditional bit operation of individual pixels image scrambling method, this paper proposed a new scrambling algorithm. The new scrambling algorithm combined with SBPO and chaotic sequence. First, every eight pixels from different areas of image were selected according to chaotic sequence, and grouped together to form a collection. Second, the SBPO was performed in every collection and built eight pixels of the image with new values. The scrambling image was generated when all pixels were processed. In this way, the proposed algorithm transforms drastically the statistical characteristic of original image information, so, it increases the difficulty of an unauthorized individual to break the encryption. The simulation results and the performance analysis show that the algorithm has large secret-key space, high security, fast scrambling speed and strong robustness, and is suitable for practical use to protect the security of digital image information over the Internet. Index Terms-image scrambling, image encryption, chaotic sequence, logistic map, space-bit-plane operation (SBPO)
Liu et al. REF presented space bit plane operation SBPO and chaotic sequence based image encryption scheme, which simultaneously changes the pixel value besides with pixel positions.
13611241
A Space-bit-plane Scrambling Algorithm for Image Based on Chaos
{ "venue": "Journal of Multimedia", "journal": "Journal of Multimedia", "mag_field_of_study": [ "Computer Science" ] }
State-of-the-art models for semantic segmentation are based on adaptations of convolutional networks that had originally been designed for image classification. However, dense prediction problems such as semantic segmentation are structurally different from image classification. In this work, we develop a new convolutional network module that is specifically designed for dense prediction. The presented module uses dilated convolutions to systematically aggregate multiscale contextual information without losing resolution. The architecture is based on the fact that dilated convolutions support exponential expansion of the receptive field without loss of resolution or coverage. We show that the presented context module increases the accuracy of state-of-the-art semantic segmentation systems. In addition, we examine the adaptation of image classification networks to dense prediction and show that simplifying the adapted network can increase accuracy.
Yu and Vladlen REF develop a dilated convolution to systematically aggregate multi-scale contextual information without losing resolution.
17127188
Multi-Scale Context Aggregation by Dilated Convolutions
{ "venue": "ICLR 2016", "journal": "arXiv: Computer Vision and Pattern Recognition", "mag_field_of_study": [ "Computer Science" ] }
We present a study investigating the use of vibrotactile feedback for touch-screen keyboards on PDAs. Such keyboards are hard to use when mobile as keys are very small. We conducted a laboratory study comparing standard buttons to ones with tactile feedback added. Results showed that with tactile feedback users entered significantly more text, made fewer errors and corrected more of the errors they did make. We ran the study again with users seated on an underground train to see if the positive effects transferred to realistic use. There were fewer beneficial effects, with only the number of errors corrected significantly improved by the tactile feedback. However, we found strong subjective feedback in favour of the tactile display. The results suggest that tactile feedback has a key role to play in improving interactions with touch screens.
Brewster et al. investigated the effect of vibrotactile feedback provided directly on the touch screen of mobile phones REF .
590675
Tactile feedback for mobile interactions
{ "venue": "CHI", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Sentiment analysis on Twitter has attracted much attention recently due to its wide applications in both, commercial and public sectors. In this paper we present SentiCircles, a lexicon-based approach for sentiment analysis on Twitter. Different from typical lexicon-based approaches, which offer a fixed and static prior sentiment polarities of words regardless of their context, SentiCircles takes into account the co-occurrence patterns of words in different contexts in tweets to capture their semantics and update their pre-assigned strength and polarity in sentiment lexicons accordingly. Our approach allows for the detection of sentiment at both entity-level and tweet-level. We evaluate our proposed approach on three Twitter datasets using three different sentiment lexicons to derive word prior sentiments. Results show that our approach significantly outperforms the baselines in accuracy and F-measure for entity-level subjectivity (neutral vs. polar) and polarity (positive vs. negative) detections. For tweet-level sentiment detection, our approach performs better than the state-of-the-art SentiStrength by 4-5% in accuracy in two datasets, but falls marginally behind by 1% in F-measure in the third dataset.
Although not designed for multilingual purposes, REF presents SentiCircles, an interesting lexicon-based approach that takes into account the co-occurrence of words to provide a context-specific sentiment orientation.
11110108
Contextual semantics for sentiment analysis of Twitter
{ "venue": "Inf. Process. Manag.", "journal": "Inf. Process. Manag.", "mag_field_of_study": [ "Computer Science" ] }
This paper proposes a formalism for nonmonotonic reasoning based on prioritized argumentation. We argue that nonmonotonic reasoning in general can be viewed as selecting monotonic inferences by a simple notion of priority among inference rules. More importantly, these types of constrained inferences can be speci ed in a knowledge representation language where a theory consists of a collection of rules of rst order formulas and a priority among these rules. We recast default reasoning as a form of prioritized argumentation, and illustrate how the parameterized formulation of priority may be used to allow various extensions and modi cations to default reasoning. We also show that it is possible, but more di cult, to express prioritized argumentation by default logic: even some particular forms of prioritized argumentation cannot be represented modularly by defaults under the same language.
You et al. REF define a prioritized argumentative characterization of non-monotonic reasoning, by casting default reasoning as a form of prioritized argumentation.
15235246
Nonmonotonic Reasoning as Prioritized Argumentation
{ "venue": "IEEE Transactions on Knowledge and Data Engineering", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Teaching plays a very important role in our society, by spreading human knowledge and educating our next generations. A good teacher will select appropriate teaching materials, impact suitable methodologies, and set up targeted examinations, according to the learning behaviors of the students. In the field of artificial intelligence, however, one has not fully explored the role of teaching, and pays most attention to machine learning. In this paper, we argue that equal attention, if not more, should be paid to teaching, and furthermore, an optimization framework (instead of heuristics) should be used to obtain good teaching strategies. We call this approach "learning to teach". In the approach, two intelligent agents interact with each other: a student model (which corresponds to the learner in traditional machine learning algorithms), and a teacher model (which determines the appropriate data, loss function, and hypothesis space to facilitate the training of the student model). The teacher model leverages the feedback from the student model to optimize its own teaching strategies by means of reinforcement learning, so as to achieve teacher-student co-evolution. To demonstrate the practical value of our proposed approach, we take the training of deep neural networks (DNN) as an example, and show that by using the learning to teach techniques, we are able to use much less training data and fewer iterations to achieve almost the same accuracy for different kinds of DNN models (e.g., multi-layer perceptron, convolutional neural networks and recurrent neural networks) under various machine learning tasks (e.g., image classification and text understanding).
In REF , a learning to teach framework is proposed that a teacher model, trained by optimization metadata, can guide the learning of student models.
13687188
Learning to Teach
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science" ] }
Abstract-An ensemble of approaches for reliable person re-identification is proposed in this paper. The proposed ensemble is built combining widely used person re-identification systems using different color spaces and some variants of state-of-the-art approaches that are proposed in this paper. Different descriptors are tested, and both texture and color features are extracted from the images; then the different descriptors are compared using different distance measures (e.g., the Euclidean distance, angle, and the Jeffrey distance). To improve performance, a method based on skeleton detection, extracted from the depth map, is also applied when the depth map is available. The proposed ensemble is validated on three widely used datasets (CAVIAR4REID, IAS, and VIPeR), keeping the same parameter set of each approach constant across all tests to avoid overfitting and to demonstrate that the proposed system can be considered a general-purpose person reidentification system. Our experimental results show that the proposed system offers significant improvements over baseline approaches. The source code used for the approaches tested in this paper will be available at https://www.dei.unipd.it/node/2357 and http://robotics.dei.unipd.it/ reid/.
REF proposed an ensemble of different approaches and combined them on color spaces.
55375061
Ensemble of Different Approaches for a Reliable Person Re-identification System
{ "venue": null, "journal": "Applied Computing and Informatics", "mag_field_of_study": [ "Computer Science" ] }
Blind gain and phase calibration (BGPC) is a structured bilinear inverse problem, which arises in many applications, including inverse rendering in computational relighting (albedo estimation with unknown lighting), blind phase and gain calibration in sensor array processing, and multichannel blind deconvolution. The fundamental question of the uniqueness of the solutions to such problems has been addressed only recently. In a previous paper, we proposed studying the identifiability in bilinear inverse problems up to transformation groups. In particular, we studied several special cases of blind gain and phase calibration, including the cases of subspace and joint sparsity models on the signals, and gave sufficient and necessary conditions for identifiability up to certain transformation groups. However, there were gaps between the sample complexities in the sufficient conditions and the necessary conditions. In this paper, under a mild assumption that the signals and models are generic, we bridge the gaps by deriving tight sufficient conditions with optimal sample complexities.
The question of the uniqueness of the solution (h 0 , x 0 ) (up-to global scaling) of the multichannel bilinear problems of the form y n =ĥ 0 Ĉ x 0,n has been studied in REF .
206798587
Optimal Sample Complexity for Blind Gain and Phase Calibration
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Mathematics", "Computer Science" ] }
In compressed sensing, one takes n < N samples of an N -dimensional vector x0 using an n × N matrix A, obtaining undersampled measurements y = Ax0. For random matrices with Gaussian i.i.d entries, it is known that, when x0 is k-sparse, there is a precisely determined phase transition: for a certain region in the (k/n, n/N )-phase diagram, convex optimization min ||x||1 subject to y = Ax, x ∈ X N typically finds the sparsest solution, while outside that region, it typically fails. It has been shown empirically that the same property -with the same phase transition location -holds for a wide range of non-Gaussian random matrix ensembles. We report extensive experiments showing that the Gaussian phase transition also describes numerous deterministic matrices, including Spikes and Sines, Spikes and Noiselets, Paley Frames, Delsarte-Goethals Frames, Chirp Sensing Matrices, and Grassmannian Frames. Namely, for each of these deterministic matrices in turn, for a typical k-sparse object, we observe that convex optimization is successful over a region of the phase diagram that coincides with the region known for Gaussian random matrices. Our experiments considered coefficients constrained to X N for four different sets X ∈ {[0, 1], R+, R, C}, and the results establish our finding for each of the four associated phase transitions.
This class of deterministic matrices is shown to have the same phase transition phenomenon as observed in the Gaussian random matrix case, see REF for more details.
9852991
Deterministic matrices matching the compressed sensing phase transitions of Gaussian random matrices
{ "venue": "Proceedings of the National Academy of Sciences of the United States of America", "journal": "Proceedings of the National Academy of Sciences of the United States of America", "mag_field_of_study": [ "Medicine", "Engineering" ] }
Abstract-Unpredictable node mobility, low node density, and lack of global information make it challenging to achieve effective data forwarding in Delay-Tolerant Networks (DTNs). Most of the current data forwarding schemes choose the nodes with the best cumulative capability of contacting others as relays to carry and forward data, but these nodes may not be the best relay choices within a short time period due to the heterogeneity of transient node contact characteristics. In this paper, we propose a novel approach to improve the performance of data forwarding with a short time constraint in DTNs by exploiting the transient social contact patterns. These patterns represent the transient characteristics of contact distribution, network connectivity and social community structure in DTNs, and we provide analytical formulations on these patterns based on experimental studies of realistic DTN traces. We then propose appropriate forwarding metrics based on these patterns to improve the effectiveness of data forwarding. When applied to various data forwarding strategies, our proposed forwarding metrics achieve much better performance compared to existing schemes with similar forwarding cost.
A recent work REF of Gao et al. proposed a novel data forwarding strategy which exploits the transient social contact patterns in social networks.
249481
On Exploiting Transient Social Contact Patterns for Data Forwarding in Delay-Tolerant Networks
{ "venue": "IEEE Transactions on Mobile Computing", "journal": "IEEE Transactions on Mobile Computing", "mag_field_of_study": [ "Computer Science" ] }
Fractal geometry has been applied widely in the analysis of medical images to characterize the irregular complex tissue structures that do not lend themselves to straightforward analysis with traditional Euclidean geometry. In this study, we treat the nonfractal behaviour of medical images over large-scale ranges by considering their box-counting fractal dimension as a scale-dependent parameter rather than a single number. We describe this approach in the context of the more generalized Rényi entropy, in which we can also compute the information and correlation dimensions of images. In addition, we describe and validate a computational improvement to box-counting fractal analysis. This improvement is based on integral images, which allows the speedup of any box-counting or similar fractal analysis algorithm, including estimation of scale-dependent dimensions. Finally, we applied our technique to images of invasive breast cancer tissue from 157 patients to show a relationship between the fractal analysis of these images over certain scale ranges and pathologic tumour grade (a standard prognosticator for breast cancer). Our approach is general and can be applied to any medical imaging application in which the complexity of pathological image structures may have clinical value.
Other authors (see REF ) treat the inconsistency of the fractal characteristics of medical images over large scale-ranges, proving that the fractal dimension depends on scale at which the object of interest is considered.
3212012
Scale-Specific Multifractal Medical Image Analysis
{ "venue": "Computational and Mathematical Methods in Medicine", "journal": "Computational and Mathematical Methods in Medicine", "mag_field_of_study": [ "Computer Science", "Medicine", "Mathematics" ] }
Abstract-In this paper, a cognitive-radio-inspired asymmetric network coding (CR-AsNC) scheme is proposed for multipleinput-multiple-output (MIMO) cellular transmissions, where information exchange among users and base-station (BS) broadcasting can be accomplished simultaneously. The key idea is to apply the concept of cognitive radio (CR) in network coding transmissions, where the BS tries sending new information while helping users' transmissions as a relay. In particular, we design an asymmetric network coding method for information exchange between the BS and the users, although many existing works consider the design of network coding in symmetric scenarios. To approach the optimal performance, an iterative precoding design for CR-AsNC is first developed. Then, a channeldiagonalization-based precoding design with low complexity is proposed, to which power allocation can be optimized with a closed-form solution. The simulation results show that the proposed CR-AsNC scheme with precoding optimization can significantly improve system transmission performance. Index Terms-Multiple-input-multiple-output (MIMO), multiway transmission, network coding, precoding design.
In REF , the authors aim to minimize the mean square error at the destinations and design an asymmetric network coding solution for data transmission between the BS and the users.
2643965
On the Design of Cognitive-Radio-Inspired Asymmetric Network Coding Transmissions in MIMO Systems
{ "venue": "IEEE Transactions on Vehicular Technology", "journal": "IEEE Transactions on Vehicular Technology", "mag_field_of_study": [ "Computer Science" ] }
The probabilistic guarded-command language pGCL [15] contains both demonic and probabilistic nondeterminism, which makes it suitable for reasoning about distributed random algorithms [14] . Proofs are based on weakest precondition semantics, using an underlying logic of real-(rather than Boolean-) valued functions. We present a mechanization of the quantitative logic for pGCL [16] using the HOL theorem prover [4], including a proof that all pGCL commands satisfy the new condition sublinearity, the quantitative generalization of conjunctivity for standard The mechanized theory also supports the creation of an automatic proof tool which takes as input an annotated pGCL program and its partial correctness specification, and derives from that a sufficient set of verification conditions. This is employed to verify the partial correctness of the probabilistic voting stage in Rabin's mutual-exclusion algorithm [10] .
al REF also formalized the probabilistic guarded-command language ( pGCL) in HOL.
931357
Probabilistic guarded commands mechanized in HOL
{ "venue": "IN PROCEEDINGS OF QAPL 2004", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Next generation multicore applications will process massive amounts of data with significant sharing. Data movement and management impacts memory access latency and consumes power. Therefore, harnessing data locality is of fundamental importance in future processors. We propose a scalable, efficient shared memory cache coherence protocol that enables seamless adaptation between private and logically shared caching of on-chip data at the fine granularity of cache lines. Our data-centric approach relies on inhardware yet low-overhead runtime profiling of the locality of each cache line and only allows private caching for data blocks with high spatio-temporal locality. This allows us to better exploit the private caches and enable low-latency, low-energy memory access, while retaining the convenience of shared memory. On a set of parallel benchmarks, our lowoverhead locality-aware mechanisms reduce the overall energy by 25% and completion time by 15% in an NoC-based multicore with the Reactive-NUCA on-chip cache organization and the ACKwise limited directory-based coherence protocol.
Kurian REF propose a locality-aware adaptive coherence protocol to manage the distributed private caches in CMPs.
967655
The locality-aware adaptive cache coherence protocol
{ "venue": "ISCA", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
A technical infrastructure for storing, querying and managing RDF data is a key element in the current semantic web development. Systems like Jena, Sesame or the ICS-FORTH RDF Suite are widely used for building semantic web applications. Currently, none of these systems supports the integrated querying of distributed RDF repositories. We consider this a major shortcoming since the semantic web is distributed by nature. In this paper we present an architecture for querying distributed RDF repositories by extending the existing Sesame system. We discuss the implications of our architecture and propose an index structure as well as algorithms for query processing and optimization in such a distributed context. Categories and Subject Descriptors The need for handling multiple sources of knowledge and information is quite obvious in the context of semantic web applications. First of all we have the duality of schema and information content where multiple information sources can adhere to the same schema. Further, the re-use, extension and combination of multiple schema files is considered to be common practice on the semantic web [7] . Despite the inherently distributed nature of the semantic web, most current RDF infrastructures (for example [4] ) store information locally as a single knowledge repository, i.e., RDF models from remote sources are replicated locally and merged into a single model. Distribution is virtually retained through the use of namespaces to distinguish between different models. We argue that many interesting applications on the semantic web would benefit from or even require an RDF infrastructure that supports real distribution of information sources that can be accessed from a single point. Beyond Copyright is held by the author/owner(s). May 17-22, 2004, New York, New York, USA. ACM 1-58113-844-X/04/0005. the argument of conceptual adequacy, there are a number of technical reasons for real distribution in the spirit of distributed databases: The commonly used approach of using a local copy of a remote source suffers from the problem of changing information. Directly using the remote source frees us from the need of managing change as we are always working with the original. Keeping different sources separate from each other provides us with a greater flexibility concerning the addition and removal of sources. In the distributed setting, we only have to adjust the corresponding system parameters. In many cases, it will even be unavoidable to adopt a distributed architecture, for example in scenarios in which the data is not owned by the person querying it. In this case, it will often not be permitted to copy the data. More and more information providers, however, create interfaces that can be used to query the information. The same holds for cases where the information sources are too large to just create a single model containing all the information, but they still can be queried using a special interface (Musicbrainz is an example of this case). Further, we might want to include sources that are not available in RDF, but that can be wrapped to produce query results in RDF format. A typical example is the use of a free-text index as one source of information. Sometimes there is not even a fixed model that could be stored in RDF, because the result of a query is only calculated at runtime (Google, for instance, provides a programming interface that could be wrapped into an RDF source). In all these scenarios, we are forced to access external information sources from an RDF infrastructure without being able to create a local copy of the information we want to query. On the semantic web, we almost always want to combine such external sources with each other and with additional schema knowledge. This confirms the need to consider an RDF infrastructure that deals with information sources that are actually distributed across different locations. In this paper, we address the problem of integrated access to distributed RDF repositories from a practical point of view. In particular, starting from a real-life use case where we are considering a number of distributed sources that contain research results in the form of publications, we take the existing RDF storage and retrieval system Sesame and describe how the architecture and the query processing methods of the system have to be extended in order to move to a distributed setting. 631
Stuckenschmidt et al. REF proposed an index structure for distributed RDF repositories based on schema paths (property chains) rather than on statistical summaries of the graph-structure of the data.
5118757
Index structures and algorithms for querying distributed RDF repositories
{ "venue": "WWW '04", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Abstract-We introduce a weakly supervised approach for learning human actions modeled as interactions between humans and objects. Our approach is human-centric: We first localize a human in the image and then determine the object relevant for the action and its spatial relation with the human. The model is learned automatically from a set of still images annotated only with the action label. Our approach relies on a human detector to initialize the model learning. For robustness to various degrees of visibility, we build a detector that learns to combine a set of existing part detectors. Starting from humans detected in a set of images depicting the action, our approach determines the action object and its spatial relation to the human. Its final output is a probabilistic model of the humanobject interaction, i.e., the spatial relation between the human and the object. We present an extensive experimental evaluation on the sports action data set from [1], the PASCAL Action 2010 data set [2] , and a new human-object interaction data set.
For instance, Prest et al. REF used an action classifier and a human detector to determine the relevant object for an action and its relative location to a human.
1819788
Weakly Supervised Learning of Interactions between Humans and Objects
{ "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "mag_field_of_study": [ "Computer Science", "Medicine" ] }
Wireless Sensor Networks (WSNs) is a mature research field that can be tracked back to the 1980, when the United States Defense Advanced Research Projects Agency (DARPA) started the Distributed Sensor Network (DSN) program to formally explore the challenges in implementing WSNs. Since then, WSNs progressed into academia and found home in civilian scientific research. Today, WSNs remains an active research topic with over 64,825 publications in IEEEXplore alone. Recent advances in semiconductor and networking technologies are driving the ubiquitous deployment of large-scale WSNs. These technologies enable a new generation of WSNs that differ greatly from networks studied as recently as 5 to 10 years ago. Today's stateof-the-art WSNs hardware platforms have lower costs and are expected to last longer opening the way into their deployment for any application. However, existing WSNs deployments are limited to experimental few hundred nodes networks. This talk explores the current challenges hindering the real-life deployment of large-scale WSN systems. The focus is to map historical WSNs deployment challenges and their severity against today's network and sensor hardware capabilities. Isolating factors that no longer have great effect on the deployment and maintenance costs of WSNs, is expected to instigate a wave of next generation WSNs deployment. To facilitate this aim, the talk will be focusing on WSN-based protocols/algorithms developed to solve current problems facing the deployment of large-scale WSNs. The talk draw on lessons learned from pilot deployments in different application areas including: space exploration, minerals mapping, heat diffusion, border security and flood prediction and control.
In this REF paper, author discusses the challenges in routing protocols that hinders the massive deployment of such networks.
207230931
null
null
We consider the automated recognition of human actions in surveillance videos. Most current methods build classifiers based on complex handcrafted features computed from the raw inputs. Convolutional neural networks (CNNs) are a type of deep model that can act directly on the raw inputs. However, such models are currently limited to handling 2D inputs. In this paper, we develop a novel 3D CNN model for action recognition. This model extracts features from both the spatial and the temporal dimensions by performing 3D convolutions, thereby capturing the motion information encoded in multiple adjacent frames. The developed model generates multiple channels of information from the input frames, and the final feature representation combines information from all channels. To further boost the performance, we propose regularizing the outputs with high-level features and combining the predictions of a variety of different models. We apply the developed models to recognize human actions in the real-world environment of airport surveillance videos, and they achieve superior performance in comparison to baseline methods.
In REF a 3D extension of the standard 2D convolutional neural networks (CNNs) was introduced, where information from both the space and the time dimension are included by performing 3D convolution.
1923924
3D Convolutional Neural Networks for Human Action Recognition
{ "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "mag_field_of_study": [ "Computer Science", "Medicine" ] }
Abstract. The SPIRIT search engine provides a test bed for the development of web search technology that is specialised for access to geographical information. Major components include the user interface, geographical ontology, maintenance and retrieval functions for a test collection of web documents, textual and spatial indexes, relevance ranking and metadata extraction. Here we summarise the functionality and interaction between these components before focusing on the design of the geo-ontology and the development of spatio-textual indexing methods. The geo-ontology supports functionality for disambiguation, query expansion, relevance ranking and metadata extraction. Geographical place names are accompanied by multiple geometric footprints and qualitative spatial relationships. Spatial indexing of documents has been integrated with text indexing through the use of spatio-textual keys in which terms are concatenated with spatial cells to which they relate. Preliminary experiments demonstrate considerable performance benefits when compared with pure text indexing and with text indexing followed by a spatial filtering stage.
The SPIRIT spatial search engine REF has shown ontology to be useful in searching web documents with spatial contents.
1198106
The SPIRIT Spatial Search Engine: Architecture, Ontologies and Spatial Indexing
{ "venue": "GIScience", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
A utility environment is dynamic in nature. It has to deal with a large number of resources of varied types, as well as multiple combinations of those resources. By embedding operator and user level policies in resource models, specifications of composite resources may be automatically generated to meet these multiple and varied requirements. This paper describes a model for automated policybased construction of complex environments. We pose the policy problem as a goal satisfaction problem that can be addressed using a constraint satisfaction formulation. We show how a variety of construction policies can he accommodated by the resource models during resource composition. We are implementing this model in a prototype that uses CIM as the underlying resource model and exploring issues that arise as a result of that implementation.
REF describes a model for automated policy-based construction as a goal satisfaction problem in utility computing environments.
14282214
Automated policy-based resource construction in utility computing environments
{ "venue": "2004 IEEE/IFIP Network Operations and Management Symposium (IEEE Cat. No.04CH37507)", "journal": "2004 IEEE/IFIP Network Operations and Management Symposium (IEEE Cat. No.04CH37507)", "mag_field_of_study": [ "Computer Science" ] }
Numerous important problems can be framed as learning from graph data. We propose a framework for learning convolutional neural networks for arbitrary graphs. These graphs may be undirected, directed, and with both discrete and continuous node and edge attributes. Analogous to image-based convolutional networks that operate on locally connected regions of the input, we present a general approach to extracting locally connected regions from graphs. Using established benchmark data sets, we demonstrate that the learned feature representations are competitive with state of the art graph kernels and that their computation is highly efficient.
Convolutional neural network for graphs REF learns feature representations for the graphs as a whole.
1430801
Learning Convolutional Neural Networks for Graphs
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science", "Mathematics" ] }
Abstract. Automatic liver segmentation from CT volumes is a crucial prerequisite yet challenging task for computer-aided hepatic disease diagnosis and treatment. In this paper, we present a novel 3D deeply supervised network (3D DSN) to address this challenging task. The proposed 3D DSN takes advantage of a fully convolutional architecture which performs efficient end-to-end learning and inference. More importantly, we introduce a deep supervision mechanism during the learning process to combat potential optimization difficulties, and thus the model can acquire a much faster convergence rate and more powerful discrimination capability. On top of the high-quality score map produced by the 3D DSN, a conditional random field model is further employed to obtain refined segmentation results. We evaluated our framework on the public MICCAI-SLiver07 dataset. Extensive experiments demonstrated that our method achieves competitive segmentation results to state-of-the-art approaches with a much faster processing speed.
The 3D deeply supervised network (DSN), which had a much faster convergence and better discrimination capability, could be extended to other medical applications REF .
18564860
3D Deeply Supervised Network for Automatic Liver Segmentation from CT Volumes
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science" ] }
Few-shot learning has become essential for producing models that generalize from few examples. In this work, we identify that metric scaling and metric task conditioning are important to improve the performance of few-shot algorithms. Our analysis reveals that simple metric scaling completely changes the nature of few-shot algorithm parameter updates. Metric scaling provides improvements up to 14% in accuracy for certain metrics on the mini-Imagenet 5-way 5-shot classification task. We further propose a simple and effective way of conditioning a learner on the task sample set, resulting in learning a task-dependent metric space. Moreover, we propose and empirically test a practical end-to-end optimization procedure based on auxiliary task co-training to learn a task-dependent metric space. The resulting few-shot learning model based on the task-dependent scaled metric achieves state of the art on mini-Imagenet. We confirm these results on another few-shot dataset that we introduce in this paper based on CIFAR100.
TADAM REF produces a task-dependent metric space based on conditioning a learner on the task set.
44061218
TADAM: Task dependent adaptive metric for improved few-shot learning
{ "venue": "Advances in Neural Information Processing Systems 31, 2018", "journal": null, "mag_field_of_study": [ "Computer Science", "Mathematics" ] }
Learning-based hashing algorithms are "hot topics" because they can greatly increase the scale at which existing methods operate. In this paper, we propose a new learning-based hashing method called "fast supervised discrete hashing" (FSDH) based on "supervised discrete hashing" (SDH). Regressing the training examples (or hash code) to the corresponding class labels is widely used in ordinary least squares regression. Rather than adopting this method, FSDH uses a very simple yet effective regression of the class labels of training examples to the corresponding hash code to accelerate the algorithm. To the best of our knowledge, this strategy has not previously been used for hashing. Traditional SDH decomposes the optimization into three sub-problems, with the most critical sub-problem -discrete optimization for binary hash codes -solved using iterative discrete cyclic coordinate descent (DCC), which is time-consuming. However, FSDH has a closed-form solution and only requires a single rather than iterative hash code-solving step, which is highly efficient. Furthermore, FSDH is usually faster than SDH for solving the projection matrix for least squares regression, making FSDH generally faster than SDH. For example, our results show that FSDH is about 12-times faster than SDH when the number of hashing bits is 128 on the CIFAR-10 data base, and FSDH is about 151-times faster than FastHash when the number of hashing bits is 64 on the MNIST data-base. Our experimental results show that FSDH is not only fast, but also outperforms other comparative methods.
Fast Supervised Discrete Hashing (FSDH) REF enhances SDH using an exchangeable regression trick that leads to a closed-form solution for efficient binary codes.
206767101
Fast Supervised Discrete Hashing
{ "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "mag_field_of_study": [ "Computer Science", "Mathematics", "Medicine" ] }
Two mobile agents (robots) have to meet in an a priori unknown bounded terrain modeled as a polygon, possibly with polygonal obstacles. Agents are modeled as points, and each of them is equipped with a compass. Compasses of agents may be incoherent. Agents construct their routes, but the actual walk of each agent is decided by the adversary: the movement of the agent can be at arbitrary speed, the agent may sometimes stop or go back and forth, as long as the walk of the agent in each segment of its route is continuous, does not leave it and covers all of it. We consider several scenarios, depending on three factors: (1) obstacles in the terrain are present, or not, (2) compasses of both agents agree, or not, (3) agents have or do not have a map of the terrain with their positions marked. The cost of a rendezvous algorithm is the worst-case sum of lengths of the agents' trajectories until their meeting. For each scenario we design a deterministic rendezvous algorithm and analyze its cost. We also prove lower bounds on the cost of any deterministic rendezvous algorithm in each case. For all scenarios these bounds are tight.
An example is in REF , where the Rendezvous point computed by one of the algorithms is either the central vertex or the midpoint of the central segment of the medial axis of a polygon.
155924
Asynchronous deterministic rendezvous in bounded terrains
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Mathematics", "Computer Science" ] }
Adoption of security standards has the capability of improving the security level in an organization as well as to provide additional benefits and possibilities to the organization. However mapping of used standards has to be done when more than one security standard is employed in order to prevent redundant activities, not optimal resource management and unnecessary outlays. Employment of security ontology to map different standards can reduce the mapping complexity however the choice of security ontology is of high importance and there are no analyses on security ontology suitability for adaptive standards mapping. In this paper we analyze existing security ontologies by comparing their general properties, OntoMetric factors and ability to cover different security standards. As none of the analysed security ontologies were able to cover more than 1/3 of security standards, we proposed a new security ontology, which increased coverage of security standards compared to the existing ontologies and has a better branching and depth properties for ontology visualization purposes. During this research we mapped 4 security standards (ISO 27001, PCI DSS, ISSA 5173 and NISTIR 7621) to the new security ontology, therefore this ontology and mapping data can be used for adaptive mapping of any set of these security standards to optimize usage of multiple security standards in an organization.
In REF the new exhaustive ontology was proposed, which increased coverage of security standards compared to the existing ontologies and has better branching and depth properties for ontology visualization purposes.
14669334
Security Ontology for Adaptive Mapping of Security Standards
{ "venue": "Int. J. Comput. Commun. Control", "journal": "Int. J. Comput. Commun. Control", "mag_field_of_study": [ "Computer Science" ] }
We extend Cyclone, a type-safe polymorphic language at the C level of abstraction, with threads and locks. Data races can violate type safety in Cyclone. An extended type system statically guarantees their absence by enforcing that thread-shared data is protected via locking and that threadlocal data does not escape the thread that creates it. The extensions interact smoothly with parametric polymorphism and region-based memory management. We present a formal abstract machine that models the need to prevent races, a polymorphic type system for the machine that supports thread-local data, and a corresponding type-safety result.
It features region-based memory management, and more recently threads and locks REF , via a sophisticated type system.
3107854
Type-safe multithreading in cyclone
{ "venue": "TLDI '03", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Promise is one of the most powerful tools producing trust and facilitating cooperation, and sticking to the promise is deemed as a key social norm in social interactions. The present study explored the extent to which promise would influence investors' decision-making in the trust game where promise had no predictive value regarding trustees' reciprocation. In addition, we examined the neural underpinnings of the investors' outcome processing related to the trustees' promise keeping and promise breaking. Consistent with our hypothesis, behavioral results indicated that promise could effectively increase the investment frequency of investors. Electrophysiological results showed that, promise induced larger differentiated-FRN responses to the reward and non-reward discrepancy. Taken together, these results suggested that promise would promote cooperative behavior, while breach of promise would be regarded as a violation of the social norm, corroborating the vital role of nonenforceable commitment in social decision making. Even in the early human society, some basic forms of cooperative agreements already existed to maintain prosocial connections like trust and cooperation [1] . As one of these primitive agreements, promise is expressed orally and is non-binding in nature, which aims to convey the information that one is trustworthy and reliable to other partners in social interactions [2] . Despite its non-enforceable nature, in the contemporary society, a large number of social exchanges still rely on such oral commitments, mainly due to its simple, valid and efficient features. In the field of behavioral and experimental economics, trust game (TG) was designed to investigate people's trust and cooperative behaviors as well as various factors contributing to the trust of investors. Therefore, trust game can well be adopted to examine the influence of promise on cooperative behaviors as well as its effect on outcome evaluation subsequent to the action of keeping or breaking the promise. Trust game, initially proposed by Berg et al. [3], is a oneshot game between two anonymous players, an investor and a trustee. The investor is firstly assigned with some tokens, and can choose to keep all of them or invest some of the endowment to the trustee. The tokens invested to trustee are multiplied. Then the privilege goes to trustee who can choose to keep all the multiplied tokens or pay certain amount back to the investor. In the one-shot TG experimental setting, factors such as reputation, revenge and punishment do not come into play to affect any players' monetary payoffs in a direct manner. Thus, theoretically, there is no economical reason for a rational trustee to reciprocate. Based on such a belief, the Nash equilibrium is that the investor chooses to keep all the endowed tokens and not to invest them. However, contrary to the prediction of classical game theory, previous studies showed that, most of the investors did invest considerable amounts, and many trustees did manifest certain degree of reciprocity [4] . Therefore, we can safely conclude that social preference factors like trust and trustworthiness might play vital roles in such scenarios. Until now, a large number of studies from both behavioral economics and neuroeconomics have made great efforts to investigate the behavioral and neural underpinnings of trust and trustworthiness [5, 6] . As mentioned above, promise, as an important mechanism to foster trust, may bring interesting phenomena to be explored in trust game settings. On the other side, promise, non-binding by its nature, may not only be kept, but also be breached. Therefore, it's an open and intriguing question to investigate how investors will respond to the orally non-binding commitment from trustees. Additionally, the brain mechanisms involved in such non-binding cooperative agreements also remain to be further clarified. To resolve these elusive issues, one pioneering neuroimaging study has examined the neural correlates of promise keeping and promise breaking from the perspective of trustees [1] . It is discovered that breach of promise leads to increased activation in the dorsal lateral prefrontal cortex (DLPFC), anterior cingulate cortex (ACC) and amygdala, which indicates that breaking the promise involves an emotional conflict over social norm obedience. In addition, the breach of promise can be predicted by brain activation in anterior insula, ACC and inferior frontal gyrus during promise making, suggesting that malevolence can be reflected in the brain pattern long before the action actually takes place. However, up to now, rare do we know about how sticking to the promise and violation of it would be evaluated and experienced from the perspective of investors. In the present study, we modified the one-shot trust game and named it as promise-trust game (promise-TG). For the purpose of capturing the subjective evaluation of trustees' commitment and how it affects investors' subsequent investment behavior, we mainly focused on the role of investors. The major difference between the task in the current study and the classical trust game lies in the message-leaving stage, which is implemented before the investor decides whether to invest or keep the initial tokens on that round [1, 7] . At the message-leaving stage, the trustee can either make a promise by leaving the message "I promise to give back half of the money." or give up the message-leaving right. In the latter case, "The other person did not leave a message." would be informed to the investor. In addition, to make the experimental design simple and concise, we adopted a dichotomic design for monetary parameters in the current study. To be specific, the investor has only two options to choose from. He/she may either invest or keep the entire tokens on each round. The trustee only has two alternatives as well, and can freely decide whether to keep or to break a promise. Decisions of both exchange partners will be implemented and thus cause monetary consequences. Behaviorally, we predict that the simple non-binding promise that comes from trustees would increase the expectation of the investors to receive refund and thus increase the likelihood to invest to their counterparts accordingly. In the current study, we adopted Event-related Potentials (ERPs) to track the temporal dynamics of brain activity during outcome evaluation resulted from the fulfillment of commitment and the breach of promise. FRN is the negative deflection peaking around the 250-350 ms period upon feedback presentation, which shows maximum amplitude over medial frontal cortex. Because FRN is often found to reflect various aspects of the outcome, especially outcome valence, it is adopted to examine reward processing as the result of reciprocity or nonreciprocity. Given the important role of FRN in the decision-making, there are two popular theories to account for its significance in reflecting the underlying mechanism of outcome evaluation, which are reinforcement learning theory and the motivational theory. According to the reinforcement account of FRN, unexpected losses would induce relatively larger FRNs than gains, which has been widely replicated in the past decade [8] . In the social domain, since equal split of assets is accepted as a long-established social norm, unfair offers are unexpected and will elicit more pronounced FRN than fair ones in the ultimatum game [9] [10] [11] . In addition, existing literatures showed that various factors might have an influence on the subjective expectancy [12, 13] . For example, in one of our previous studies, we discovered that effort strengthened the expectation toward positive outcomes, and violation of this stronger expectancy led to a larger FRN deflection [14] . Given the prediction of the reinforcement theory, in the current study, when trustees made a promise at the message-leaving stage, investors' expectancy toward trustees' reciprocity might increase. Thus, when this expectancy was violated as reflected by the breach of promise, a stronger prediction error might occur, which might give rise to a stronger FRN deflection. On the other hand, in term of the motivational account of the FRN, motivational significance of the outcome could explain the FRN discrepancy toward reward and non-reward in risky decision-making [12, 14, 15] , and outcomes that are more motivationally significant would lead to enhanced FRN discrepancy, termed as differentiated FRN (d-FRN) which is the FRN elicited by losses (or negative feedback) minus that elicited by gains (or positive feedback). For instance, in one of our recent studies [16], we investigated how interpersonal relationship modulates people's empathic responses to others' financial gains and losses. We observed that, when subjects passively observed others executing the gambling task, d-FRN elicited toward their friends' gain loss discrepancy was enhanced than that toward strangers, which is consistent with the increased motivational relevance exerted toward the socially closer counterparts. In a similar manner, in the current study, compared with the non-promise condition, outcomes in the promise condition bear more motivational significance to investors, and they are more concerned with whether the promise was kept or broken in the latter case. Therefore, we would predict that promise might enlarge the amplitude of the FRN discrepancy at the feedback stage. Methods Eighteen healthy, right-handed subjects aged 18-26 years (M = 22.65 years, SD = 2.29 years) participated in this study, 11 of which were male. Subjects in the present study were registered students of Zhejiang University who had normal or corrected-to-normal vision, all of whom reported no history of neurological disorders or mental diseases. This study was approved by the Institutional Review Board of Zhejiang University Neuromanagement Lab. Written informed consent forms were obtained from all subjects before the implementation of the experiment. Data from one subject was discarded because of excessive recording artifacts. Thus, data from 17 valid subjects went into the final data analysis. The subjects were comfortably seated in a dimly lit, sound-attenuated and electrically shielded room. The stimuli were in text format, and were designed and presented using E-Prime software package (Psychology Software Tools, Pittsburgh, PA, USA). They were presented at the center of a computer screen at a distance of 100 cm with a visual angle of 8.69°×6.52°(15.2 cm × 11.4 cm, width × height). Subjects were instructed to use the keypad to make their choices. The experiment consisted of 4 blocks, each containing 60 trials. Participants were informed the rules of promise-TG before the experiment started. They were convinced that we had collected the promise and reciprocation decisions from 240 anonymous partners in a behavioral study and they would play non-real time with them in the ERP experiment. In order to guarantee that the act of giving a promise offers no valuable prediction for the trustees' reciprocation, reward/nonreward outcomes were determined in a pseudorandom order, with half of the trials returning the favor while the other half in a non-reciprocity manner in both promise and nonpromise conditions. Prior to the experiment, participants were informed that they would take part in a game in which they would make a decision of whether or not to cooperate with anonymous partners. The partners were given the opportunity to make a promise to the participants before the investment decisions were made. According to our manipulation, half of the partners made promises while the others did not leave messages to the subjects. On each trial, participants were given ¥2, and they could either keep it all or invest it all. In the latter case, the partner would receive ¥10. Then the anonymous partners could either keep the entire ¥10 or give half of it (¥5) back to participants. The total amount of money could be increased by such investment behaviors. Participants were offered remuneration equal to the amount accumulated at the end of the game. As illustrated in Fig. 1 , each trial was initiated by a fixation presented for 1000 ms on the blank screen, which indicated the beginning of each trial. In order to convince the subjects that they were playing with real people instead of computers, the given name of the partner was presented. Then, the partners' message was presented for 1000 ms. If the partner made a promise, then "I promise to give back half of the money." would appear on the screen. Otherwise, "The other person did not leave a message." would be shown. Subsequently, two boxes showing "invest 2" or "keep 2" were presented, and participants were required to choose one of the two boxes by pressing the "1" or "3" key on the keypad. Half of the participants were instructed to press the "1" key if they wanted to invest ¥2, and to press the "3" key if they wanted to keep the money. For the remaining participants, the response pattern was reversed. Once the decision had been made, the chosen box would be highlighted to emphasize the choice. If the participant invested ¥2, after a 800-1000 ms blank screen, a feedback stimulus of either "You got ¥5." (indicating that the partner returned ¥5 to the participant), or "You got ¥0." (indicating that the partner kept the entire ¥10) would be presented in the center of the screen for 1000 ms. If the participant did not invest in the partner and kept ¥2, no additional feedback information would be given. Each feedback stimulus was followed by 800 ms of blank screen, and then another trial would start. To familiarize participants with the task, the experiment started with 4 practice trials. Before the end of the experiment, in order to test whether promise would modulate participants' expectations toward reciprocity of the partner, they were asked to judge the extent to which they expect their partners to return monetary rewards in both the promise and non-promise conditions using 7-point Likert scale (from 1 = "not likely at all" to 7 = "very likely"). After that, participants were debriefed and paid accordingly. EEGs were recorded (band-pass 0.05 Hz to 70 Hz, sampling rate 500 Hz) from 64 scalp sites with Neuroscan Synamp2 Amplifier. The left mastoid served as on-line reference. EEGs were off-line re-referenced to the average of the left and the right mastoids. The electrode on the cephalic region was applied as ground. Vertical Electrooculogram (EOG) was recorded supra and infra-orbitally at the left eye, while horizontal EOG was recorded at the left versus right orbital rim. Electrode impedance was maintained below 5 kO during the experiment. For the behavioral data, numbers of "invest" and "keep" choices for both promise and nonpromise conditions were calculated. Paired t-test was conducted for the comparison of investment rate as well as reciprocity expectation across the two experimental conditions. In addition, two-tailed Pearson correlation was carried out between investment rate and reciprocity expectation on the individual level. For the ERP data, during the offline EEGs analysis, ocular artifacts were removed, which is followed by digital filtering through a zero phase shift (low pass at 30 Hz, 24 dB/octave). Time window of 200 ms before and 800 ms after stimulus presentation was segmented, and the whole epoch was baseline-corrected by the 200 ms interval prior to stimulus onset. Trials containing amplifier clipping, bursts of electromyography activity, or peak-to-peak deflection exceeding ±80 μV were excluded. For each subject, recorded EEGs were separately averaged in each condition over each recording site. Specifically, EEG epochs were separately averaged for outcome (reward/non-reward) × commitment (promise/non-promise) conditions, which resulted in a total of four conditions. Sufficient number of events went into following ERP analysis, with a minimum of 33 valid trials per condition. Considering that the maximal FRN amplitudes appeared at frontal sites, data from the electrodes F1, Fz, F2, FC1, FCz and FC2 were analyzed. Mean amplitudes in the 260-340 ms time window post-onset of feedback, defined through visual inspection of the d-FRN, went into a 2 (outcome) × 2 (commitment) × 6 (electrode) repeated measures ANOVA. Simple effect analysis was conducted when the interaction effect achieved significance. Greenhouse-Geisser correction was applied in all statistical analyses when necessary. Results The paired t-test showed that participants made more cooperative choices in the promise condition than in the non-promise condition [M promise = 76.21% (SD = 0.11), M non-promise = 61.15% (SD = 0.13), t (16) = 3.828, p = 0.001]. Furthermore, the behavioral rating for expected reciprocity of partners showed that reward expectation was higher in the promise condition than in the non-promise condition [M promise = 5.24 (SD = 0.90), M non-promise = 3.65 (SD = 1.17), t (16) = 4.359, p < 0.001]. Correlation analysis was carried out between the two behavioral indicators of investment rate and reciprocity expectation, which were significantly correlated in the promise condition, with a two-tailed Pearson correlation level of 0.499 (p = 0.041), but not in the nonpromise condition (two-tailed Pearson correlation = 0.099, p = 0.705). Outcome evaluation is mainly reflected in FRN. As presented in Fig. 2 , ANOVA analysis for the FRN revealed main effects of outcome (F 1, 16 = 15.311; p = 0.001) and electrode (F 5, 80 = 15.796; p < 0.001), while the main effect of commitment was not significant (F 1, 16 = 1.044; p = 0.322). The interaction effect of outcome and commitment was also significant (F 1, 16 = 6.805; p = 0.019). Data of FCz went into simple effect analysis, because it was reported to show the largest FRN amplitude in most previous literatures [8] . FRN was discrepant between reward and non-reward both in the promise condition (F 1, 16 = 23.447; p < 0.001) and the non-promise condition (F 1, 16 = 6.042; p = 0.026). We further examined the commitment effect in the reward and non-reward conditions respectively. Importantly, this effect was found to be significant in the non-reward condition (F 1, 16 = 7.873; p = 0.013), but not in the reward condition (F 1, 16 = 0.018; p = 0.896). Long before complex social and legal systems came into existence, promise had become one of the most effective means to establish social contracts in the society, which orally guaranteed the occurrence of certain acts afterwards. Generally speaking, since keeping one's word is regarded as a potent social norm, promise transfers the information of a person's trustworthiness. When a promise is given, positive expectation toward that person is generally formed. Thus, when a promise is not fulfilled eventually, it is not only a breach of trust and expectation, but also a violation of one of the most fundamental social norms cherished by the whole society. In the present study, we adopted the promise-TG to examine the influence of promise on trust and cooperative behavior. During the message-leaving stage, trustees might make nonbinding promises of returns contingent on the investment of investors, and investors could decide whether to trust the trustees using promise as a clue. It is worth noticing that promise does not guarantee reciprocation, and the trustee might either honor the promise or take advantage of the device to give a promise first and then break it. This manipulation made it possible for us to examine the influence of promise on investors' cooperative behaviors as well as their neural responses to the trustees' promise keeping and promise breaking. Trust, defined as the willingness to accept vulnerability based on positive expectations about another's behavior [17] , is behaviorally measured as the investment rate in the current experiment. On the aggregate level, investors demonstrated a stronger investment inclination toward trustees who gave promises, which replicated previous findings [1, 18] . This indicated that investors expected them to be more likely to return the monetary reward relative to those who did not gave promises. The additional reciprocity expectation rating further supported this interpretation by showing that the level of reward expectation was higher in the promise condition than in the non-promise condition. On the individual level, investors who had higher reciprocity expectation in the promise condition also invested more. Based on these evidences, we can infer that promise is an important tool for trust development in our daily life. At the neural level, we found a general FRN effect for reciprocity versus non-reciprocity comparison, which is consistent with many previous findings that non-reward outcome elicited a larger deflection of FRN compared with reward outcome [8] . Importantly, promise amplified the size of the FRN effect. Specifically, the modulation effect of promise took place mainly on ERP responses to promise breaking, but not on those to fulfillment of commitment.
The decision-making in the "trust game" REF has been used to examine human-human trust level.
12224773
You Have My Word: Reciprocity Expectation Modulates Feedback-Related Negativity in the Trust Game
{ "venue": "PLoS ONE", "journal": "PLoS ONE", "mag_field_of_study": [ "Psychology", "Medicine" ] }
Abstract. Lossy Trapdoor Functions (LTDFs), introduced by Peikert and Waters (STOC 2008) have been useful for building many cryptographic primitives. In particular, by using an LTDF that loses a (1 − 1/ω(log n)) fraction of all its input bits, it is possible to achieve CCA security using the LTDF as a black-box. Unfortunately, not all candidate LTDFs achieve such a high level of lossiness. In this paper we drastically improve upon previous results and show that an LTDF that loses only a non-negligible fraction of a single bit can be used in a black-box way to build numerous cryptographic primitives, including oneway injective trapdoor functions, CPA secure public-key encryption (PKE), and CCA-secure PKE. We then describe a novel technique for constructing such slightly-lossy LTDFs and give a construction based on modular squaring.
Mol and Yilek showed that slightly LTDFs are sufficient for constructing a IND-CCA secure public-key encryption scheme REF .
18858026
Chosen-ciphertext security from slightly lossy trapdoor functions
{ "venue": "In Public Key Cryptography — PKC 2010, volume ???? of Springer LNCS", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
We propose a technique for improving the quality of phrase-based translation systems by creating synthetic translation options-phrasal translations that are generated by auxiliary translation and postediting processes-to augment the default phrase inventory learned from parallel data. We apply our technique to the problem of producing English determiners when translating from Russian and Czech, languages that lack definiteness morphemes. Our approach augments the English side of the phrase table using a classifier to predict where English articles might plausibly be added or removed, and then we decode as usual. Doing so, we obtain significant improvements in quality relative to a standard phrase-based baseline and to a to post-editing complete translations with the classifier.
REF create synthetic translation options to augment the phrase-table.
14918307
Generating English Determiners in Phrase-Based Translation with Synthetic Translation Options
{ "venue": "Workshop on Statistical Machine Translation", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and encode a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition.
REF introduced a general attention mechanism into machine translation model which allow the model to automatically search for parts of the correlative words.
11212020
Neural Machine Translation by Jointly Learning to Align and Translate
{ "venue": "ICLR 2015", "journal": "arXiv: Computation and Language", "mag_field_of_study": [ "Computer Science", "Mathematics" ] }
Attributes are an intermediate representation, which enables parameter sharing between classes, a must when training data is scarce. We propose to view attribute-based image classification as a label-embedding problem: each class is embedded in the space of attribute vectors. We introduce a function which measures the compatibility between an image and a label embedding. The parameters of this function are learned on a training set of labeled samples to ensure that, given an image, the correct classes rank higher than the incorrect ones. Results on the Animals With Attributes and Caltech-UCSD-Birds datasets show that the proposed framework outperforms the standard Direct Attribute Prediction baseline in a zero-shot learning scenario. The label embedding framework offers other advantages such as the ability to leverage alternative sources of information in addition to attributes (e.g. class hierarchies) or to transition smoothly from zero-shot learning to learning with large quantities of data.
REF proposes a label-embedding model for attribute-based zero-shot classification.
8288863
Label-Embedding for Attribute-Based Classification
{ "venue": "2013 IEEE Conference on Computer Vision and Pattern Recognition", "journal": "2013 IEEE Conference on Computer Vision and Pattern Recognition", "mag_field_of_study": [ "Computer Science" ] }
Long Range (LoRa) is a popular technology used to construct Low-Power Wide-Area Network (LPWAN) networks. Given the popularity of LoRa it is likely that multiple independent LoRa networks are deployed in close proximity. In this situation, neighbouring networks interfere and methods have to be found to combat this interference. In this paper we investigate the use of directional antennae and the use of multiple base stations as methods of dealing with internetwork interference. Directional antennae increase signal strength at receivers without increasing transmission energy cost. Thus, the probability of successfully decoding the message in an interference situation is improved. Multiple base stations can alternatively be used to improve the probability of receiving a message in a noisy environment. We compare the effectiveness of these two approaches via simulation. Our findings show that both methods are able to improve LoRa network performance in interference settings. However, the results show that the use of multiple base stations clearly outperforms the use of directional antennae. For example, in a setting where data is collected from 600 nodes which are interfered by four networks with 600 nodes each, using three base stations improves the Data Extraction Rate (DER) from 0.24 to 0.56 while the use of directional antennae provides an increase to only 0.32.
Voigt et al. REF have investigated the use of directional antennas and the addition of new LoRa base stations on the reduction of packet loss due to LoRa inter-network interference.
9631183
Mitigating Inter-network Interference in LoRa Networks
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science" ] }
The nonnegative rank of a nonnegative matrix is the minimum number of nonnegative rank-one factors needed to reconstruct it exactly. The problem of determining this rank and computing the corresponding nonnegative factors is difficult; however it has many potential applications, e.g., in data mining, graph theory and computational geometry. In particular, it can be used to characterize the minimal size of any extended reformulation of a given combinatorial optimization program. In this paper, we introduce and study a related quantity, called the restricted nonnegative rank. We show that computing this quantity is equivalent to a problem in polyhedral combinatorics, and fully characterize its computational complexity. This in turn sheds new light on the nonnegative rank problem, and in particular allows us to provide new improved lower bounds based on its geometric interpretation. We apply these results to slack matrices and linear Euclidean distance matrices and obtain counter-examples to two conjectures of Beasly and Laffey, namely we show that the nonnegative rank of linear Euclidean distance matrices is not necessarily equal to their dimension, and that the rank of a matrix is not always greater than the nonnegative rank of its square.
Recently, the authors of this paper obtained a related result on the restricted nonnegative rank, a notion introduced and studied by Gillis and Glineur REF .
56422797
On the Geometric Interpretation of the Nonnegative Rank
{ "venue": "Linear Algebra and its Applications 437 (11), pp. 2685-2712, 2012", "journal": null, "mag_field_of_study": [ "Mathematics" ] }
The widespread availability of 802.11-based hardware has made it the premier choice of both researchers and practitioners for developing new wireless networks and applications. However, the ever increasing set of demands posed by these applications is stretching the 802.11 MAC protocol beyond its intended capabilities. For example, 802.11 provides no control over allocation of resources, and the default allocation policy is ill-suited for heterogeneous environments and multi-hop networks. Fairness problems are further exacerbated in multi-hop networks due to link asymmetry and hidden terminals. In this paper, we take a first step towards addressing these problems without replacing the MAC layer by presenting the design and the implementation of an Overlay MAC Layer (OML), that works on top of the 802.11 MAC layer. OML uses loosely-synchronized clocks to divide the time in to equal size slots, and employs a distributed algorithm to allocate these slots among competing nodes. We have implemented OML in both a simulator and on a wireless testbed using the Click modular router. Our evaluation shows that OML can not only provide better flexibility but also improve the fairness, throughput and predictability of 802.11 networks.
A loosely synchronized Overlay MAC Layer (OML) has been used in REF in order to improve the performance of 802.11 networks.
2065177
An overlay MAC layer for 802.11 networks
{ "venue": "MobiSys '05", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
A wide variety of deep neural applications increasingly rely on the cloud to perform their compute-heavy inference. This common practice requires sending private and privileged data over the network to remote servers, exposing it to the service provider and potentially compromising its privacy. Even if the provider is trusted, the data can still be vulnerable over communication channels or via side-channel attacks in the cloud. To that end, this paper aims to reduce the information content of the communicated data with as little as possible compromise on the inference accuracy by making the sent data noisy. An undisciplined addition of noise can significantly reduce the accuracy of inference, rendering the service unusable. To address this challenge, this paper devises Shredder, an end-to-end framework, that, without altering the topology or the weights of a pre-trained network, learns additive noise distributions that significantly reduce the information content of communicated data while maintaining the inference accuracy. The key idea is finding the additive noise distributions by casting it as a disjoint offline learning process with a loss function that strikes a balance between accuracy and information degradation. The loss function also exposes a knob for a disciplined and controlled asymmetric trade-off between privacy and accuracy. While keeping the DNN intact, Shredder divides inference between the cloud and the edge device, striking a balance between computation and communication. In the separate phase of inference, the edge device takes samples from the Laplace distributions that were collected during the proposed offline learning phase and populates a noise tensor with these sampled elements. Then, the edge device merely adds this populated noise tensor to the intermediate results to be sent to the cloud. As such, Shredder enables accurate inference on noisy intermediate data without the need to update the model or the cloud, or any training process during inference. We also formally prove that Shredder maximizes privacy with minimal impact on DNN accuracy while the tradeoff between privacy and accuracy is controlled through a mathematical knob. Experimentation with six real-world DNNs from text processing and image classification shows that Shredder reduces the mutual information between the input and the communicated data to the cloud by 74.70% compared to the original execution while only sacrificing 1.58% loss in accuracy. On average, Shredder also offers a speedup of 1.79× over Wi-Fi and 2.17× over LTE compared to cloud-only execution when using an off-the-shelf mobile GPU (Tegra X2) on the edge.
Shredder REF partitions the DNN models between the edge and the cloud, and adds learned noise to the intermediate data at the edge before offloading to the cloud.
203593417
Shredder: Learning Noise Distributions to Protect Inference Privacy
{ "venue": null, "journal": "Proceedings of the Twenty-Fifth International Conference on Architectural Support for Programming Languages and Operating Systems", "mag_field_of_study": [ "Computer Science" ] }
This paper discusses a node selection problem for bearings-only tracking in wireless sensor networks (WSNs). Saving energy and prolonging the lifetime of the network are the research focuses due to the severely constrained resource of WSNs. An energyefficient network management strategy is necessary to achieve good tracking performance at low cost. In this paper, an energyefficient node selection algorithm for bearings-only sensors in decentralized sensor networks is proposed. The residual energy of a node is incorporated into the objective function of node selection. A new criterion of node selection is also made to coordinate with the objective function. Compared with the other common methods, the proposed method can reduce the cost of the entire network, balance nodes energy expenditure, and extend the lifetime of network. Simulation results prove the effectiveness of the proposed method and show good performance in tracking accuracy and energy consumption. consume extra energy because of selecting cluster members and controlling the tracking, and the node selection also neglects the angular diversity of nodes. In [12], a user selection scheme is presented to minimize the overhead energy consumed by cooperative spectrum sensing in a cognitive radio sensor network. This method can conserve energy and achieve reasonably acceptable spectrum sensing accuracy, but it only focuses on the sensor node with a cognitive radio. Zhao et al. [13] propose an informationdriven sensor querying (IDSQ) and data routing approach, which employs a mixture of both information gain and cost as the objective function for node selection. This method introduces Mahalanobis distance as a measure of information utility and defines an entropy based information-theoretic measure. However, Mahalanobis distance can be only applied to the range sensors, and the entropy is difficult to compute in practice. Besides, an entropy-based sensor selection heuristic approach is presented in [14] . Although it shows good accuracy for target localization, this approach has high computational complexity. More details and algorithms about node selection can be found in [15] [16] [17] [18] [19] [20] [21] [22] [23] [24] [25] . This paper works at a distributed network consisting of bearings-only sensor nodes, that is, microphone sensors arrays, which give direction of arrival (DOA) estimations for tracking [26] . In the distributed system, there is no processing center for nodes to send their observations [27] . All information is also processed locally on nodes. The distributed network is considered in this paper for the following reasons. On one hand, the distributed system is insensitive to the changes of the network topology by nodes addition, remove, or failure. On the other hand, the transmission range is much shorter than the centralized network, thereby saving energy and cost. In this paper, we propose an energy-efficient node selection algorithm for bearings-only target tracking. The goal of our algorithm is to provide greatest improvement to tracking accuracy at the lowest cost and balance the energy consumption between nodes. Therefore, we redefine the information utility brought by nodes, incorporate the residual energy of nodes into NSP, and make a new criterion for node selection. The rest of this paper is organized as follows. Section 2 introduces the system model, the decentralized extended Kalman filter (DEKF), and the foundation for node selection. In Section 3, the proposed method and other common methods (ANS, GNS, and CLT) are described in detail. The simulation results are given in Section 4. Finally, Section 5 concludes the paper.
In REF , an energy-efficient node selection algorithm for bearingsonly sensors was proposed.
29067271
An Energy-Efficient Node Selection Algorithm in Bearings-Only Target Tracking Sensor Networks
{ "venue": null, "journal": "International Journal of Distributed Sensor Networks", "mag_field_of_study": [ "Computer Science" ] }
This paper presents a constructioninspecific model of multiword expression decomposability based on latent semantic analysis. We use latent semantic analysis to determine the similarity between a multiword expression and its constituent words, and claim that higher similarities indicate greater decomposability. We test the model over English noun-noun compounds and verb-particles, and evaluate its correlation with similarities and hyponymy values in WordNet. Based on mean hyponymy over partitions of data ranked on similarity, we furnish evidence for the calculated similarities being correlated with the semantic relational content of WordNet.
Other approaches use Latent Semantic Analysis (LSA) to determine the similarity between a potential idiom and its components REF .
1695436
An Empirical Model Of Multiword Expression Decomposability
{ "venue": "Workshop On Multiword Expressions: Analysis, Acquisition And Treatment", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
We introduce a weighted version of the ranking algorithm by Karp et al. (STOC 1990), and we prove a competitive ratio of 0.6534 for the vertex-weighted online bipartite matching problem when online vertices arrive in random order. Our result shows that random arrivals help beating the 1-1/e barrier even in the vertexweighted case. We build on the randomized primal-dual framework by Devanur et al. (SODA 2013) and design a two dimensional gain sharing function, which depends not only on the rank of the offline vertex, but also on the arrival time of the online vertex. To our knowledge, this is the first competitive ratio strictly larger than 1-1/e for an online bipartite matching problem achieved under the randomized primal-dual framework. Our algorithm has a natural interpretation that offline vertices offer a larger portion of their weights to the online vertices as time increases, and each online vertex matches the neighbor with the highest offer at its arrival.
Recently, Huang et al. REF generalize the randomized online primal dual framework to handle random arrivals and give a generalization of Ranking that is 0.653-competitive in the vertex-weighted case.
196834475
Online Vertex-Weighted Bipartite Matching: Beating 1-1/e with Random Arrivals
{ "venue": "TALG", "journal": null, "mag_field_of_study": [ "Mathematics", "Computer Science" ] }
Abstract Hard real-time systems must obey strict timing constraints. Therefore, one needs to derive guarantees on the worst-case execution times of a system's tasks. In this context, predictable behavior of system components is crucial for the derivation of tight and thus useful bounds. This paper presents results about the predictability of common cache replacement policies. To this end, we introduce three metrics, evict, fill, and mls that capture aspects of cache-state predictability. A thorough analysis of the LRU, FIFO, MRU, and PLRU policies yields the respective values under these metrics. To the best of our knowledge, this work presents the first quantitative, analytical results for the predictability of replacement policies. Our results support empirical evidence in static cache analysis.
REF analyzed the predictability of different cache replacement policies.
12858787
Timing predictability of cache replacement policies
{ "venue": "Real-Time Systems", "journal": "Real-Time Systems", "mag_field_of_study": [ "Computer Science" ] }
Abstract-Educational organizations are one of the important parts of our society and playing a vital role for growth and development of any nation. Data Mining is an emerging technique with the help of this one can efficiently learn with historical data and use that knowledge for predicting future behavior of concern areas. Growth of current education system is surely enhanced if data mining has been adopted as a futuristic strategic management tool. The Data Mining tool is able to facilitate better resource utilization in terms of student performance, course development and finally the development of nation's education related standards. In this paper a student data from a community college database has been taken and various classification approaches have been performed and a comparative analysis has been done. In this research work Support Vector Machines (SVM) are established as a best classifier with maximum accuracy and minimum root mean square error (RMSE). The study also includes a comparative analysis of all Support Vector Machine Kernel types and in this the Radial Basis Kernel is identified as a best choice for Support Vector Machine. A Decision tree approach is proposed which may be taken as an important basis of selection of student during any course program. The paper is aimed to develop a faith on Data Mining techniques so that present education and business system may adopt this as a strategic management tool.
In research work of authors REF Support Vector Machines (SVM) are constructed with minimum root mean square error (RMSE) and better accuracy.
17706267
Data Mining in Education: Data Classification and Decision Tree Approach
{ "venue": null, "journal": "International Journal of e-Education, e-Business, e-Management and e-Learning", "mag_field_of_study": [ "Computer Science" ] }
Recent work has shown that optical flow estimation can be formulated as a supervised learning task and can be successfully solved with convolutional networks. Training of the so-called FlowNet was enabled by a large synthetically generated dataset. The present paper extends the concept of optical flow estimation via convolutional networks to disparity and scene flow estimation. To this end, we propose three synthetic stereo video datasets with sufficient realism, variation, and size to successfully train large networks. Our datasets are the first large-scale datasets to enable training and evaluating scene flow methods. Besides the datasets, we present a convolutional network for real-time disparity estimation that provides state-of-the-art results. By combining a flow and disparity estimation network and training it jointly, we demonstrate the first scene flow estimation with a convolutional network.
Renderings of entire synthetic environments have been proposed for training convolutional networks for stereo disparity and scene flow estimation REF .
206594275
A Large Dataset to Train Convolutional Networks for Disparity, Optical Flow, and Scene Flow Estimation
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science", "Mathematics" ] }
Abstract-Network intrusion detection systems (NIDSs) monitor network traffic for suspicious activity and alert the system or network administrator. With the onset of gigabit networks, current generation networking components for NIDS will soon be insufficient for numerous reasons; most notably because the existing methods cannot support high-performance demands. Field-programmable gate arrays (FPGAs) are an attractive medium to handle both high throughput and adaptability to the dynamic nature of intrusion detection. In this work, we design an FPGA-based architecture for anomaly detection in network transmissions. We first develop a feature extraction module (FEM) which aims to summarize network information to be used at a later stage. Our FPGA implementation shows that we can achieve significant performance improvements compared to existing software and application-specific integrated-circuit implementations. Then, we go one step further and demonstrate the use of principal component analysis as an outlier detection method for NIDSs. The results show that our architecture correctly classifies attacks with detection rates exceeding 99% and false alarms rates as low as 1.95%. Moreover, using extensive pipelining and hardware parallelism, it can be shown that for realistic workloads, our architectures for FEM and outlier analysis achieve 21.25-and 23.76-Gb/s core throughput, respectively. Index Terms-Feature extraction, field-programmable gate arrays (FPGA), network intrusion detection system (NIDS), principal component analysis (PCA).
Abhishek et al designed an FPGA-based architecture for anomaly detection in network transmissions REF .
1072982
An FPGA-Based Network Intrusion Detection Architecture
{ "venue": "IEEE Transactions on Information Forensics and Security", "journal": "IEEE Transactions on Information Forensics and Security", "mag_field_of_study": [ "Computer Science" ] }
Abstract-Detecting small obstacles on the road ahead is a critical part of the driving task which has to be mastered by fully autonomous cars. In this paper, we present a method based on stereo vision to reliably detect such obstacles from a moving vehicle. The proposed algorithm performs statistical hypothesis tests in disparity space directly on stereo image data, assessing freespace and obstacle hypotheses on independent local patches. This detection approach does not depend on a global road model and handles both static and moving obstacles. For evaluation, we employ a novel lost-cargo image sequence dataset comprising more than two thousand frames with pixelwise annotations of obstacle and free-space and provide a thorough comparison to several stereo-based baseline methods. The dataset will be made available to the community to foster further research on this important topic 4 . The proposed approach outperforms all considered baselines in our evaluations on both pixel and object level and runs at frame rates of up to 20 Hz on 2 mega-pixel stereo imagery. Small obstacles down to the height of 5 cm can successfully be detected at 20 m distance at low false positive rates.
The work by Pinggera et al. REF performs statistical hypothesis tests in disparity space directly on stereo image data, assessing free space and obstacle hypotheses on independent local patches.
1236166
Lost and Found: detecting small road hazards for self-driving vehicles
{ "venue": "2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)", "journal": "2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)", "mag_field_of_study": [ "Computer Science" ] }
The traditional Low Energy Adaptive Cluster Hierarchy (LEACH) routing protocol is a clustering-based protocol. The uneven selection of cluster heads results in premature death of cluster heads and premature blind nodes inside the clusters, thus reducing the overall lifetime of the network. With a full consideration of information on energy and distance distribution of neighboring nodes inside the clusters, this paper proposes a new routing algorithm based on differential evolution (DE) to improve the LEACH routing protocol. To meet the requirements of monitoring applications in outdoor environments such as the meteorological, hydrological and wetland ecological environments, the proposed algorithm uses the simple and fast search features of DE to optimize the multi-objective selection of cluster heads and prevent blind nodes for improved energy efficiency and system stability. Simulation results show that the proposed new LEACH routing algorithm has better performance, effectively extends the working lifetime of the system, and improves the quality of the wireless sensor networks.
An improved version of LEACH is presented in REF to improve energy efficiency and system stability by using genetic algorithm (GA) during the selection of cluster heads.
16586169
A Differential Evolution-Based Routing Algorithm for Environmental Monitoring Wireless Sensor Networks
{ "venue": "Sensors (Basel, Switzerland)", "journal": "Sensors (Basel, Switzerland)", "mag_field_of_study": [ "Engineering", "Medicine", "Computer Science" ] }
Abstract-The main difference between the wireless underground sensor networks (WUSNs) and the terrestrial wireless sensor networks is the signal propagation medium. The underground is a challenging environment for wireless communications since the propagation medium is no longer air but soil, rock and water. The well established wireless signal propagation techniques using electromagnetic (EM) waves do not work well in this environment due to three problems: high path loss, dynamic channel condition and large antenna size. New techniques using magnetic induction (MI) create constant channel condition and can accomplish the communication with small size coils. In this paper, detailed analysis on the path loss and the bandwidth of the MI system in underground soil medium is provided. Based on the channel analysis, the MI waveguide technique for communication is developed in order to reduce the high path loss of the traditional EM wave system and the ordinary MI system. The performance of the EM wave system, the ordinary MI system and our improved MI waveguide system are quantitatively compared. The results reveal that the transmission range of the MI waveguide system is dramatically increased. Index Terms-Channel modeling, magnetic induction (MI), MI waveguide technique, underground communication, wireless sensor networks.
The path loss of the direct MI and the MI waveguide were analysed in detail in REF , which is an important contribution to the MI communication.
9782003
Magnetic Induction Communications for Wireless Underground Sensor Networks
{ "venue": "IEEE Transactions on Antennas and Propagation", "journal": "IEEE Transactions on Antennas and Propagation", "mag_field_of_study": [ "Physics" ] }
Abstract. We present a novel leak detection algorithm. To prove the absence of a memory leak, the algorithm assumes its presence and runs a backward heap analysis to disprove this assumption. We have implemented this approach in a memory leak analysis tool and used it to analyze several routines that manipulate linked lists and trees. Because of the reverse nature of the algorithm, the analysis can locally reason about the absence of memory leaks. We have also used the tool as a scalable, but unsound leak detector for C programs. The tool has found several bugs in larger programs from the SPEC2000 suite.
Orlovich and Rugina REF proposed a backward dataflow analysis to detect memory leaks.
7349726
Memory leak analysis by contradiction
{ "venue": "In Proceedings of the 13th International Static Analysis Symposium", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
This paper proposes a low-complexity word-level deep convolutional neural network (CNN) architecture for text categorization that can efficiently represent longrange associations in text. In the literature, several deep and complex neural networks have been proposed for this task, assuming availability of relatively large amounts of training data. However, the associated computational complexity increases as the networks go deeper, which poses serious challenges in practical applications. Moreover, it was shown recently that shallow word-level CNNs are more accurate and much faster than the state-of-the-art very deep nets such as character-level CNNs even in the setting of large training data. Motivated by these findings, we carefully studied deepening of word-level CNNs to capture global representations of text, and found a simple network architecture with which the best accuracy can be obtained by increasing the network depth without increasing computational cost by much. We call it deep pyramid CNN. The proposed model with 15 weight layers outperforms the previous best models on six benchmark datasets for sentiment classification and topic categorization.
Subsequently, deep word-level CNNs have been applied in REF .
29191669
Deep Pyramid Convolutional Neural Networks for Text Categorization
{ "venue": "ACL", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Abstract. Automatic liver segmentation in 3D medical images is essential in many clinical applications, such as pathological diagnosis of hepatic diseases, surgical planning, and postoperative assessment. However, it is still a very challenging task due to the complex background, fuzzy boundary, and various appearance of liver. In this paper, we propose an automatic and efficient algorithm to segment liver from 3D CT volumes. A deep image-to-image network (DI2IN) is first deployed to generate the liver segmentation, employing a convolutional encoder-decoder architecture combined with multi-level feature concatenation and deep supervision. Then an adversarial network is utilized during training process to discriminate the output of DI2IN from ground truth, which further boosts the performance of DI2IN. The proposed method is trained on an annotated dataset of 1000 CT volumes with various different scanning protocols (e.g., contrast and non-contrast, various resolution and position) and large variations in populations (e.g., ages and pathology). Our approach outperforms the state-of-the-art solutions in terms of segmentation accuracy and computing efficiency.
In REF , a conditional Generative Adversarial Network (cGAN) has been used to segment the human liver in 3D CT images.
8635859
Automatic Liver Segmentation Using an Adversarial Image-to-Image Network
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science" ] }
: The presented experiment assessed the sense of embodiment when interacting with virtual hands with different levels of realism. (Left) Participants performed a series of pick-and-place tasks avoiding different obstacles (the iconic virtual hand and the "fire" obstacle are depicted). (Right) Additionally, participants performed a task in which the virtual hand was potentially threatened by a spinning saw. How do people appropriate their virtual hand representation when interacting in virtual environments? In order to answer this question, we conducted an experiment studying the sense of embodiment when interacting with three different virtual hand representations, each one providing a different degree of visual realism but keeping the same control mechanism. The main experimental task was a Pick-and-Place task in which participants had to grasp a virtual cube and place it to an indicated position while avoiding an obstacle (brick, barbed wire or fire). An additional task was considered in which participants had to perform a potentially dangerous operation towards their virtual hand: place their virtual hand close to a virtual spinning saw. Both qualitative measures and questionnaire data were gathered in order to assess the sense of agency and ownership towards each virtual hand. Results show that the sense of agency is stronger for less realistic virtual hands which also provide less mismatch between the participant's actions and the animation of the virtual hand. In contrast, the sense of ownership is increased for the human virtual hand which provides a direct mapping between the degrees of freedom of the real and virtual hand.
For example, it was questioned whether the virtual representation of a hand alters the sense of agency REF , which was demonstrated to be more related to the virtual hand control and the task efficiency than to the virtual hand representation.
574978
The role of interaction in virtual embodiment: Effects of the virtual hand representation
{ "venue": "2016 IEEE Virtual Reality (VR)", "journal": "2016 IEEE Virtual Reality (VR)", "mag_field_of_study": [ "Computer Science" ] }
Attributes are visual concepts that can be detected by machines, understood by humans, and shared across categories. They are particularly useful for fine-grained domains where categories are closely related to one other (e.g. bird species recognition). In such scenarios, relevant attributes are often local (e.g. "white belly"), but the question of how to choose these local attributes remains largely unexplored. In this paper, we propose an interactive approach that discovers local attributes that are both discriminative and semantically meaningful from image datasets annotated only with fine-grained category labels and object bounding boxes. Our approach uses a latent conditional random field model to discover candidate attributes that are detectable and discriminative, and then employs a recommender system that selects attributes likely to be semantically meaningful. Human interaction is used to provide semantic names for the discovered attributes. We demonstrate our method on two challenging datasets, Caltech-UCSD Birds-200-2011 and Leeds Butterflies, and find that our discovered attributes outperform those generated by traditional approaches.
In REF Duan et al. propose to use a latent conditional random field to generate localized attributes that are both machine and human friendly.
7708151
Discovering localized attributes for fine-grained recognition
{ "venue": "2012 IEEE Conference on Computer Vision and Pattern Recognition", "journal": "2012 IEEE Conference on Computer Vision and Pattern Recognition", "mag_field_of_study": [ "Computer Science" ] }
Abstract-The IEEE 802.15.4 standard is designed as a low power and low data rate protocol offering high reliability. It defines a beaconed and unbeaconed version. In this work, we analyze the maximum throughput and minimum delay of the unbeaconed or unslotted version of the protocol. First, the most important features are described. Then the exact formula for the throughput and delay of a direct transmission between one sender and one receiver is given. This is done for the different frequency ranges and address structures used in IEEE 802.15.4. The analysis is limited to the unslotted version as this one experiences the lowest overhead. It is shown that the maximum throughput depends on the packet size. In the 2.4 GHz band, a bandwidth efficiency of 64.9% is reached when the maximum packet size is used. Further we describe the influence of the back off interval. A significant gain is found when the backs off parameters are altered. We have measured the throughput experimentally in order to compare the theoretical analysis with real-life examples.
Authors in REF state that the IEEE 802.15.4 standard is designed as a low power and low data rate protocol with high reliability.
14609339
Throughput and Delay Analysis of Unslotted IEEE 802.15.4
{ "venue": "J. Networks", "journal": "J. Networks", "mag_field_of_study": [ "Computer Science" ] }
Abstract-Within the realm of network security, we interpret the concept of trust as a relation among entities that participate in various protocols. Trust relations are based on evidence created by the previous interactions of entities within a protocol. In this work, we are focusing on the evaluation of trust evidence in ad hoc networks. Because of the dynamic nature of ad hoc networks, trust evidence may be uncertain and incomplete. Also, no preestablished infrastructure can be assumed. The evaluation process is modeled as a path problem on a directed graph, where nodes represent entities, and edges represent trust relations. We give intuitive requirements and discuss design issues for any trust evaluation algorithm. Using the theory of semirings, we show how two nodes can establish an indirect trust relation without previous direct interaction. We show that our semiring framework is flexible enough to express other trust models, most notably PGP's Web of Trust. Our scheme is shown to be robust in the presence of attackers.
In REF Theodorakopoulos and Baras propose an algebraic method for decentralised trust evaluation in ad hoc networks.
1924030
On trust models and trust evaluation metrics for ad hoc networks
{ "venue": "IEEE Journal on Selected Areas in Communications", "journal": "IEEE Journal on Selected Areas in Communications", "mag_field_of_study": [ "Computer Science" ] }
ABSTRACT Software-defined network (SDN) provides a solution for the scalable network framework with decoupled control and data plane. Migrating switches can balance the resource utilization of controllers and improve network performance. Switch migration problem has to date been formulated as a resource utilization maximization problem to address the scalability of the control plane. However, this problem is NP-hard with high-computational complexities and without addressing the security challenges of the control plane. In this paper, we propose a switch migration method, which interprets switch migration as a signature matching problem and is formulated as a 3-D earth mover's distance model to protect strategically important controllers in the network. Considering the scalability, we further propose a heuristic method which is time-efficient and suitable to large-scale networks. Simulation results show that our proposed methods can disguise strategically important controllers by diminishing the difference of traffic load between controllers. Moreover, our proposed methods can significantly relieve the traffic pressure of controllers and prevent saturation attacks. INDEX TERMS Earth mover's distance, load balancing, reconnaissance, saturation attacks, switch migration.
Zhou et al. REF design a switch migration scheme, which interprets switch migration as a matching problem and is transformed as a threedimensional Earth Mover's Distance (EMD) model to protect strategically important controllers in SDN network.
3538354
Elastic Switch Migration for Control Plane Load Balancing in SDN
{ "venue": "IEEE Access", "journal": "IEEE Access", "mag_field_of_study": [ "Computer Science" ] }
Abstract. This paper proposes a PBNM (Policy Based Network Management) framework for automating the process of generating and distributing DiffServ configuration to network devices. The framework is based on IETF standards, but introduces a new business level policy model for simplifying the process of defining QoS policies. The framework is defined in three layers: a business level policy model (based on a IETF PCIM extension), a device independent policy model (based on a IETF QPIM extension) and a device dependent policy model (based on the IETF diffserv PIB definition). The paper illustrates the use of the framework by mapping the information models to XML documents. The XML mapped information model supports the reuse of rules, conditions and network information by using XPointer references.
REF propose a framework for defining reusable business-level policies for DiffServ using XML for policy representation.
13114285
Defining Reusable Business-Level QoS Policies for DiffServ
{ "venue": "Proceedings of Distributed Systems Operations and Management WorkShop ,DSOM 2004", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Abstract-Existing work on privacy-preserving data publishing cannot satisfactorily prevent an adversary with background knowledge from learning important sensitive information. The main challenge lies in modeling the adversary's background knowledge. We propose a novel approach to deal with such attacks. In this approach, one first mines knowledge from the data to be released and then uses the mining results as the background knowledge when anonymizing the data. The rationale of our approach is that if certain facts or background knowledge exist, they should manifest themselves in the data and we should be able to find them using data mining techniques. One intriguing aspect of our approach is that one can argue that it improves both privacy and utility at the same time, as it both protects against background knowledge attacks and better preserves the features in the data. We then present the Injector framework for data anonymization. Injector mines negative association rules from the data to be released and uses them in the anonymization process. We also develop an efficient anonymization algorithm to compute the injected tables that incorporates background knowledge. Experimental results show that Injector reduces privacy risks against background knowledge attacks while improving data utility.
A technique called Injector, by Li and Li REF mines the original data for negative association rules that are then used in the anonymization process.
1270930
Injector: Mining Background Knowledge for Data Anonymization
{ "venue": "2008 IEEE 24th International Conference on Data Engineering", "journal": "2008 IEEE 24th International Conference on Data Engineering", "mag_field_of_study": [ "Computer Science" ] }
Abstract. Multidimensional process mining adopts the concept of data cubes to split event data into a set of homogenous sublogs according to case and event attributes. For each sublog, a separated process model is discovered and compared to other models to identify group-specific differences for the process. Even though it is not time-critical, performance is vital due to the explorative characteristics of the analysis. We propose to adopt well-established approaches from the data warehouse domain based on relational databases to provide acceptable performance. In this paper, we present the underlying relational concepts of PMCube, a datawarehouse-based approach for multidimensional process mining. Based on a relational database schema, we introduce generic query patterns which map OLAP queries to SQL to push the operations (i.e. aggregation and filtering) to the database management system. We evaluate the run-time behavior of our approach by a number of experiments. The results show that our approach provides a significantly better performance than the state-of-the-art for multidimensional process mining and scales up linearly with the number of events.
In REF , the performance of multidimensional process mining (MPM) is improved using relational databases techniques.
16043389
A Relational Data Warehouse for Multidimensional Process Mining
{ "venue": "SIMPDA", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
We present a novel and practical deep fully convolutional neural network architecture for semantic pixel-wise segmentation termed SegNet. This core trainable segmentation engine consists of an encoder network, a corresponding decoder network followed by a pixel-wise classification layer. The architecture of the encoder network is topologically identical to the 13 convolutional layers in the VGG16 network [1] . The role of the decoder network is to map the low resolution encoder feature maps to full input resolution feature maps for pixel-wise classification. The novelty of SegNet lies is in the manner in which the decoder upsamples its lower resolution input feature map(s). Specifically, the decoder uses pooling indices computed in the max-pooling step of the corresponding encoder to perform non-linear upsampling. This eliminates the need for learning to upsample. The upsampled maps are sparse and are then convolved with trainable filters to produce dense feature maps. We compare our proposed architecture with the widely adopted FCN [2] and also with the well known DeepLab-LargeFOV [3], DeconvNet [4] architectures. This comparison reveals the memory versus accuracy trade-off involved in achieving good segmentation performance. SegNet was primarily motivated by scene understanding applications. Hence, it is designed to be efficient both in terms of memory and computational time during inference. It is also significantly smaller in the number of trainable parameters than other competing architectures and can be trained end-to-end using stochastic gradient descent. We also performed a controlled benchmark of SegNet and other architectures on both road scenes and SUN RGB-D indoor scene segmentation tasks. These quantitative assessments show that SegNet provides good performance with competitive inference time and most efficient inference memory-wise as compared to other architectures. We also provide a Caffe implementation of SegNet and a web demo at http://mi.eng.cam.ac.uk/projects/segnet/.
al. REF save the pooling indices in the encoder block and copy them to the corresponding upsampling layer in the decoder block.
206766608
SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation
{ "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "mag_field_of_study": [ "Medicine", "Computer Science" ] }
Abstract-We consider a Web server that can provide differentiated services to clients with different quality of service (QoS) requirements. The Web server can provide N ! 1 classes of proportional-delay differentiated services (PDDS) to heterogeneous clients. An operator can specify fixed performance spacings between classes, namely, r i;iþ1 > 1, for i ¼ 1; . . . ; N À 1. Requests in class i þ 1 are guaranteed to have an average waiting time which is 1=r i;iþ1 of the average waiting time of class i requests. With PDDS, we can provide consistent performance spacings over a wide range of system loading and this simplifies many pricing issues. In addition, each client can specify a maximum average waiting time requirement to be guaranteed by the PDDS-enabled Web server. We show that, in general, the problem of assigning clients to service classes in order to optimize system efficacy is NP-complete. We propose two efficient admission control algorithms so that a Web server can provide the QoS guarantees and, at the same time, classify each client to its "lowest" admissible class, resulting in lowest usage cost for the admitted client. We also consider how to perform end-point dynamic adaptation such that admitted clients can submit requests at a lower class and further reduce their usage costs without violating their QoS requirements. We propose two dynamic adaptation algorithms: one is server-based and the other is client-based. The client-based adaptation is distributed and is based on a noncooperative game technique. We carry out experiments to illustrate the effectiveness of these algorithms under different utility functions and traffic arrival patterns (e.g., Poisson, MMPP, and Pareto). We report extensive experimental results to illustrate the effectiveness of our proposed algorithms.
In REF , the authors proposed admission control algorithms in combination with time-dependent priority scheduling for proportional queueing-delay differentiation on a Web server.
10668809
A proportional-delay DiffServ-enabled Web server: admission control and dynamic adaptation
{ "venue": "IEEE Transactions on Parallel and Distributed Systems", "journal": "IEEE Transactions on Parallel and Distributed Systems", "mag_field_of_study": [ "Computer Science" ] }
It is often useful to know the geographic positions of nodes in a communications network, but adding GPS receivers or other sophisticated sensors to every node can be expensive. We present an algorithm that uses connectivity informationwho is within communications range of whom-to derive the locations of the nodes in the network. The method can take advantage of additional information, such as estimated distances between neighbors or known positions for certain anchor nodes, if it is available. The algorithm is based on multidimensional scaling, a data analysis technique that takes O(n 3 ) time for a network of n nodes. Through simulation studies, we demonstrate that the algorithm is more robust to measurement error than previous proposals, especially when nodes are positioned relatively uniformly throughout the plane. Furthermore, it can achieve comparable results using many fewer anchor nodes than previous methods, and even yields relative coordinates when no anchor nodes are available.
The localization from mere connectivity presents a method to determine the location of nodes in the network by using the connectivity information REF .
252999
Localization from mere connectivity
{ "venue": "MobiHoc '03", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Abstract. Given one or more images of an object (or a scene), is it possible to synthesize a new image of the same instance observed from an arbitrary viewpoint? In this paper, we attempt to tackle this problem, known as novel view synthesis, by re-formulating it as a pixel copying task that avoids the notorious difficulties of generating pixels from scratch. Our approach is built on the observation that the visual appearance of different views of the same instance is highly correlated. Such correlation could be explicitly learned by training a convolutional neural network (CNN) to predict appearance flows -2-D coordinate vectors specifying which pixels in the input view could be used to reconstruct the target view. We show that for both objects and scenes, our approach is able to generate higher-quality synthesized views with crisp texture and boundaries than previous CNN-based techniques. Fig. 1 . Given an input view of an object (left) or a scene(right), our goal is to synthesize novel views of the same instance corresponding to various camera transformations (Ti). Our approach based on learning appearance flows is able to generate higher-quality results than the previous method that directly outputs pixels in the target view [1] .
Zhou et al. observed that the visual appearance of different views of the same instance is highly correlated, and designed a deep learning algorithm to predict appearance flows that are used to select proper pixels in the input views to synthesize a novel view REF .
6002134
View Synthesis by Appearance Flow
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science" ] }
Standard artificial neural networks suffer from the well-known issue of catastrophic forgetting, making continual or lifelong learning problematic. Recently, numerous methods have been proposed for continual learning, but due to differences in evaluation protocols it is difficult to directly compare their performance. To enable more meaningful comparisons, we identified three distinct continual learning scenarios based on whether task identity is known and, if it is not, whether it needs to be inferred. Performing the split and permuted MNIST task protocols according to each of these scenarios, we found that regularization-based approaches (e.g., elastic weight consolidation) failed when task identity needed to be inferred. In contrast, generative replay combined with distillation (i.e., using class probabilities as "soft targets") achieved superior performance in all three scenarios. In addition, we reduced the computational cost of generative replay by integrating the generative model into the main model by equipping it with generative feedback connections. This Replay-through-Feedback approach substantially shortened training time with no or negligible loss in performance. We believe this to be an important first step towards making the powerful technique of generative replay scalable to real-world continual learning applications. * Alternative
REF integrate the generative model into the main network by introducing feedback connections that are trained to reconstruct inputs from hidden states, hence, removing the need for a separate generative model.
52880246
Generative replay with feedback connections as a general strategy for continual learning
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science", "Mathematics" ] }
Abstract-Recent advances in software testing allow automatic derivation of tests that reach almost any desired point in the source code. There is, however, a fundamental problem with the general idea of targeting one distinct test coverage goal at a time: Coverage goals are neither independent of each other, nor is test generation for any particular coverage goal guaranteed to succeed. We present EVOSUITE, a search-based approach that optimizes whole test suites towards satisfying a coverage criterion, rather than generating distinct test cases directed towards distinct coverage goals. Evaluated on five open source libraries and an industrial case study, we show that EVOSUITE achieves up to 18 times the coverage of a traditional approach targeting single branches, with up to 44% smaller test suites.
The work presented by Fraser and Arcuri REF , shows that the whole test suite approach achieves up to 18 times the coverage than the traditional approach which would target coverage goals individually.
1213527
Evolutionary Generation of Whole Test Suites
{ "venue": "2011 11th International Conference on Quality Software", "journal": "2011 11th International Conference on Quality Software", "mag_field_of_study": [ "Computer Science" ] }
This article proposes an approach for the online analysis of accidental faults for real-time embedded systems using hidden Markov models (HMMs). By introducing reasonable and appropriate abstraction of complex systems, HMMs are used to describe the healthy or faulty states of system's hardware components. They are parametrized to statistically simulate the real system's behavior. As it is not easy to obtain rich accidental fault data from a system, the Baum-Welch algorithm cannot be employed here to train the parameters in HMMs. Inspired by the principles of fault tree analysis and the maximum entropy in Bayesian probability theory, we propose to compute the failure propagation distribution to estimate the parameters in HMMs and to adapt the parameters using a backward algorithm. The parameterized HMMs are then used to online diagnose accidental faults using a vote algorithm integrated with a low-pass filter. We design a specific test bed to analyze the sensitivity, specificity, precision, accuracy and F1-score measures by generating a large amount of test cases. The test results show that the proposed approach is robust, efficient and accurate.
In REF , the author detects faults in real-time embedded systems using a HMM through describing the healthy and faulty states of a system's hardware components.
32443197
Online diagnosis of accidental faults for real-time embedded systems using a hidden Markov model
{ "venue": null, "journal": "SIMULATION", "mag_field_of_study": [ "Computer Science" ] }
We introduce a new representation learning approach for domain adaptation, in which data at training and test time come from similar but different distributions. Our approach is directly inspired by the theory on domain adaptation suggesting that, for effective domain transfer to be achieved, predictions must be made based on features that cannot discriminate between the training (source) and test (target) domains. The approach implements this idea in the context of neural network architectures that are trained on labeled data from the source domain and unlabeled data from the target domain (no labeled target-domain data is necessary). As the training progresses, the approach promotes the emergence of features that are (i) discriminative for the main learning task on the source domain and (ii) indiscriminate with respect to the shift between the domains. We show that this adaptation behaviour can be achieved in almost any feed-forward model by augmenting it with few standard layers and a new gradient reversal layer. The resulting augmented architecture can be trained using standard backpropagation and stochastic gradient descent, and can thus be implemented with little effort using any of the deep learning packages. We demonstrate the success of our approach for two distinct classification problems (document sentiment analysis and image classification), where state-of-the-art domain adaptation performance on standard benchmarks is achieved. We also validate the approach for descriptor learning task in the context of person re-identification application.
GRL REF denotes the gradient reversal layer.
2871880
Domain-Adversarial Training of Neural Networks
{ "venue": "Journal of Machine Learning Research 2016, vol. 17, p. 1-35", "journal": null, "mag_field_of_study": [ "Mathematics", "Computer Science" ] }
We use multi-stage programming, monads and Ocaml's advanced module system to demonstrate how to eliminate all abstraction overhead from generic programs while avoiding any inspection of the resulting code. We demonstrate this clearly with Gaussian Elimination as a representative family of symbolic and numeric algorithms. We parameterize our code to a great extent -over domain, input and permutation matrix representations, determinant and rank tracking, pivoting policies, result types, etc. -at no run-time cost. Because the resulting code is generated just right and not changed afterward, MetaOCaml guarantees that the generated code is well-typed. We further demonstrate that various abstraction parameters (aspects) can be made orthogonal and compositional, even in the presence of name-generation for temporaries, and "interleaving" of aspects. We also show how to encode some domain-specific knowledge so that "clearly wrong" compositions can be rejected at or before generation time, rather than during the compilation or running of the generated code.
Carette and Kiselyov REF apply the techniques in the context of homogeneous metaprogramming for eliminating the abstraction overhead from generic code.
7300328
Multi-stage programming with functors and monads: eliminating abstraction overhead from generic code
{ "venue": null, "journal": null, "mag_field_of_study": [ "Computer Science" ] }
In various scenarios, there is a need to expose a certain API to client programs which are not fully trusted. In cases where the client programs need access to sensitive data, confidentiality can be enforced using an information flow policy. This is a general and powerful type of policy that has been widely studied and implemented. Previous work has shown how information flow policy enforcement can be implemented in a lightweight fashion in the form of a library. However, these approaches all suffer from a number of limitations. Often, the policy and its enforcement are not cleanly separated from the underlying API, and the user of the API is exposed to a strongly and unnaturally modified interface. Some of the approaches are limited to functional APIs and have difficulty handling imperative features like I/O and mutable state variables. In addition, this previous work uses classic static information flow enforcement techniques, and does not consider more recent dynamic information flow enforcement techniques. In this paper, we show that information flow policies can be enforced on imperative-style monadic APIs in a modular and reasonably general way with only a minor impact on the interface provided to API users. The main idea of this paper is that we implement the policy enforcement in a monad transformer while the underlying monadic API remains unaware and unmodified. The policy is specified through the lifting of underlying monad operations. We show the generality of our approach by presenting implementations of three important information flow enforcement techniques, including a purely dynamic, a purely static and a hybrid technique. Two of the techniques require the use of a generalisation of the Monad type class, but impact on the API interface stays limited. We show that our technique lends itself to formal reasoning by sketching a proof that our implementation of the static technique is faithful to the original presentation. Finally, we discuss fundamental limitations of our approach and how it fits in general information flow enforcement theory.
This idea was further improved in by implementing the information flow policy enforcement in a monad transformer separated from the underlying monadic API which remains unaware and unmodified REF , where the policy is specified by lifting of the underlying monad operations into the transformed monad.
17189679
Information flow enforcement in monadic libraries
{ "venue": "TLDI '11", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Genes underlying mutant phenotypes can be isolated by combining marker discovery, genetic mapping and resequencing, but a more straightforward strategy for mapping mutations would be the direct comparison of mutant and wild-type genomes. Applying such an approach, however, is hampered by the need for reference sequences and by mutational loads that confound the unambiguous identification of causal mutations. Here we introduce NIKS (needle in the k-stack), a reference-free algorithm based on comparing k-mers in whole-genome sequencing data for precise discovery of homozygous mutations. We applied NIKS to eight mutants induced in nonreference rice cultivars and to two mutants of the nonmodel species Arabis alpina. In both species, comparing pooled F 2 individuals selected for mutant phenotypes revealed small sets of mutations including the causal changes. Moreover, comparing M 3 seedlings of two allelic mutants unambiguously identified the causal gene. Thus, for any species amenable to mutagenesis, NIKS enables forward genetics without requiring segregating populations, genetic maps and reference sequences. Forward genetic screens have been of fundamental importance in elucidating biological mechanisms in model species 1 . Their success, however, has relied on the feasibility of mutant gene isolation. Identification of causal mutations typically begins with genetic mapping, followed by candidate gene sequencing and complementation studies using transformation. Advances in DNA sequencing technologies have tremendously accelerated genetic mapping by combining bulk segregant analysis, that is, pooling recombinant genomes, with whole-genome sequencing, usually referred to as mapping by sequencing 2,3 . This approach is now becoming standard for mutation mapping and identification in many model species 3-12 and has even been applied to decipher quantitative traits with complex genetic architectures 13, 14 . Recently, mutagen-induced changes have been used as novel markers, allowing mapping of mutations using isogenic mapping populations 10, 15 . Nevertheless, all mapping-by-sequencing methods rely on resequencing, a method for whole-genome reconstruction based on aligning sequences to a reference sequence. Therefore, this requirement restricts the application of the technique to species for which such a reference genome sequence is available. Many reference-sequence assembly projects are currently in progress, including ones for most of the major crop species and breeding animals. However, even with an existing reference sequence, extending mapping-by-sequencing methods beyond the sequenced reference accessions has proved technically challenging. Mutant alleles of genes that are not present in the reference sequence cannot be identified within resequencing data alone. In particular, fast-evolving genes, such as those involved in disease resistance, might not always be represented in the reference sequence 16, 17 . Alternative solutions for mapping-by-sequencing in species without reference sequences have been proposed, such as mappingby-sequencing based on reference sequences of related species or expressed sequence tag collections 11, 18 . However, all of these methods greatly rely on low sequence divergence and high levels of synteny between the mutant genome and alignment target. Recently, methods for direct genome comparison of multiple samples without a reference sequence were introduced, but none has proven to be accurate and precise enough for the identification of mutations [19] [20] [21] . NIKS is a method for reference-free genome comparison based solely on the frequencies of short subsequences within whole-genome sequencing data. It is geared toward identifying mutagen-induced, small-scale, homozygous differences between two highly related genomes, independent of their inbred or outbred background, and provides a route to identification of mutations without requiring any prior information about reference sequences or genetic maps. Principles and performance of NIKS NIKS relies on the analysis of k-mers, which are defined as subsequences of length k of a sequencing read. NIKS starts by assessing the frequency of each k-mer within the sequencing data of each sample using the k-mer-counting software Jellyfish 22 . K-mers that overlap with sequencing errors will be of low frequency, as these errors are not present in all reads from the corresponding region, and it is therefore possible to separate them from reads that are error free (Fig. 1) .
NIKS REF is designed for whole-genome sequencing protocols.
1966019
Mutation identification by direct comparison of whole-genome sequencing data from mutant and wild-type individuals using k-mers
{ "venue": "Nature Biotechnology", "journal": "Nature Biotechnology", "mag_field_of_study": [ "Medicine", "Biology" ] }
Private information retrieval (PIR) protocols allow a user to retrieve a data item from a database without revealing any information about the identity of the item being retrieved. Specifically, in information-theoretic k-server PIR, the database is replicated among k non-communicating servers, and each server learns nothing about the item retrieved by the user. The cost of PIR protocols is usually measured in terms of their communication complexity, which is the total number of bits exchanged between the user and the servers. However, another important cost parameter is the storage overhead, which is the ratio between the total number of bits stored on all the servers and the number of bits in the database. Since single-server information-theoretic PIR is impossible, the storage overhead of all existing PIR protocols is at least 2 (or k, in the case of k-server PIR). In this work, we show that information-theoretic PIR can be achieved with storage overhead arbitrarily close to the optimal value of 1, without sacrificing the communication complexity. Specifically, we prove that all known k-server PIR protocols can be efficiently emulated, while preserving both privacy and communication complexity but significantly reducing the storage overhead. To this end, we distribute the n bits of the database among s + r servers, each storing n/s coded bits (rather than replicas). Notably, our coding scheme remains the same, regardless of the specific k-server PIR protocol being emulated. For every fixed k, the resulting storage overhead (s + r)/s approaches 1 as s grows; explicitly we have r k √ s 1 + o(1) . Moreover, in the special case k = 2, the storage overhead is only 1 + 1 s . In order to achieve these results, we introduce and study a new kind of binary linear codes, called here k-server PIR codes. We then show how such codes can be constructed from Steiner systems, from one-step majoritylogic decodable codes, from constant-weight codes, and from certain locally recoverable codes. We also establish several bounds on the parameters of k-server PIR codes, and tabulate the results for all s 32 and k 16. Finally, we briefly discuss extensions of our results to nonbinary alphabets, to robust PIR, and to t-private PIR.
PIR codes were suggested in REF to decrease storage overhead in PIR schemes preserving both privacy and communication complexity.
14437868
PIR with Low Storage Overhead: Coding instead of Replication
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science", "Mathematics" ] }
Abstract. We investigate determinacy of delay games with Borel winning conditions, infinite-duration two-player games in which one player may delay her moves to obtain a lookahead on her opponent's moves. First, we prove determinacy of such games with respect to a fixed evolution of the lookahead. However, strategies in such games may depend on information about the evolution. Thus, we introduce different notions of universal strategies for both players, which are evolution-independent, and determine the exact amount of information a universal strategy needs about the history of a play and the evolution of the lookahead to be winning. In particular, we show that delay games with Borel winning conditions are determined with respect to universal strategies. Finally, we consider decidability problems, e.g., "Does a player have a universal winning strategy for delay games with a given winning condition?", for ω-regular and ω-context-free winning conditions.
Finally, all delay games with Borel winning condition are determined REF , which was shown by a reduction to delay-free games that preserves Borelness of the winning condition.
14726998
What are Strategies in Delay Games? Borel Determinacy for Games with Lookahead
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science", "Mathematics" ] }
We model a single-hop mobile network under centralized control with N service classes as a system of N weighted cost parallel queues with M~1 Յ M Ͻ N ! servers, arrivals, varying binary connectivity, and Bernoulli service success at each queue+ We consider scheduling problems in this system and, under various assumptions on arrivals and connectivity, derive conditions sufficient, but not necessary, to guarantee the optimality of an index policy+ Consider a system of N mobiles communicating in discrete-time with a central network controller~e+g+, base station or satellite! that has M channels for message communication+ At each time slot, each mobile transmits a short control pulse~mes-sage! to the controller; the control message contains information about the type and number of data messages the mobile wants to send to the controller+ If the control message is received by the controller, the mobile is connected to the controller for *This research was supported in part by ARO grant DAAH04-96-1-0377 and AFOSR grants F49620-96-1-0028 and F49620-98-1-0370+ Probability in the Engineering and Informational Sciences, 14, 2000, 259-297+ 259 that time slot+ Hence, the controller knows at each time slot the mobiles that are connected to it, and the type and amount of information each mobile has to transmit+ Based on this information, the controller must decide how to dynamically allocate its channels over time so that it can minimize the expected discounted weighted flowtime associated with the transmission of messages+ We formulate an abstract problem that captures essential features of the singlehop network described above, and analyze several variants of that problem+ The general abstract problem can be described as follows+ We consider a discrete-time model of N queues served by M servers~M Ͻ N !+ At each time, at most one server can serve a queue+ At each time, a queue is either available to be served by any server~connected! or it is not~not connected!+ At each time, before the allocation of servers, the connectivity of all the queues is known for that time+ We allow for arrivals at each queue at each time, and arrivals at a given time are assumed to occur before server allocation at that time+ The statistics of the connectivity and arrival processes are assumed arbitrary+ When a server has been allocated to a connected queue, there is a probability, fixed for each queue, that the service is successful+ This service success process is i+i+d+ and independent among queues, although the success probability can be queue dependent+ We wish to determine a server allocation policy p which minimizes where F 0 summarizes all information available at the beginning of the allocation period+ C p is the cost under p, given by b is the discount factor, T is the finite horizon, c i is the holding cost of queue i~Q i !, by which we distinguish service class, and x t i is the length of Q i at time t+ In Problem~P! we have assumed that the horizon T is finite+ We first analyze Problem~P! and its refinements as finite horizon problems, and then show that the results of the analysis hold for the corresponding infinite horizon problems+ As mentioned, the model described in Problem~P! arises in the context of single-hop mobile radio networks, which can be modeled as a bank of message queues served by one or more communication channels+ The varying connectivity relates to a variety of mobile communication systems, such as cellular and mobile packet radio networks @10#, satellite communications @3#, and meteor-burst channels @4#+ The cost function then reflects the penalty for keeping packets waiting in each queue, and the queue weighting allows for a prioritization of packets for transmission+ The same model arises in image formation systems, where service decisions correspond to sensor allocations for specific surveillance areas for information-gathering purposes+ The model also has independent interest as a specific problem in queueing theory+ C. Lott and D. Teneketzis In @2,13#, the problem of N queues with different holding costs and one server with full connectivity was considered, and the simple cµ rule was shown to be optimal, but this result does not generalize to Problem~P!+ Our model is similar to that in @10#, but only a single server and no differentiated service are considered in that article+ The authors of @10# prove that the Longest Connected Queue~LCQ! policy is optimal+ The authors of @1# and @12# use similar but more general models than @10#, which also do not include differentiated service+ All three references determine policies that maximize throughput over an infinite horizon+ Throughput maximization over an infinite horizon leads to a family of scheduling policies, not all of which maximize throughput over a finite horizon+ Furthermore, throughput maximization is not the most appropriate performance criterion for networks providing multiple classes of service+ In @14#, the authors determine an optimal control policy for a non-Markovian M-server M-queue system in the presence of a continuous-time-varying external disturbance process, where control decisions occur only at fixed epochs and where service at a queue might interfere with service at other queues+ Optimality is defined in a maximal throughput sense and is proved using stability arguments+ The authors of @3# study a model of satellite network connectivity which, on the surface, seems similar to @10#+ However, by focusing on the question of server preemption, their model, and hence the nature of the optimal policy, is actually quite different+ A class of optimal adaptive policies for the model of @3# is proposed in @11#+ Again, this work does not incorporate multiple service classes+ To analyze meteor-burst communications, a discrete-time Markov chain model is proposed in @4# for the varying communications medium+ In @5#, a careful analysis of a single-server single-queue system is performed, where the server alternates between being "on" and "off+" In @4,5#, the goal is the analysis of detailed queue attributes under a fixed service policy, and questions of determining an optimal server allocation policy in a queueing network do not arise+ There are problems related to Problem~P!~e+g+, see @9# and the references therein!, where each of N queues has its own server and a controller must decide how to route arriving packets to the set of queues+ The structure of these problems leads to somewhat different solutions, such as in @9# where packet type is not distinguished and the queues have finite capacity+ The model of this article can be considered a special case of a restless multiarmed bandit, in the sense of @16#, where state transitions and rewards take on a particular structure+ The general restless bandit problem, where the number of arms and processors is infinite and their ratio is fixed, was investigated in @16# and @15#+ The results of @16# and @15# do not apply here, as the number of arms~queues! and processors is finite+ The system stability0maximal throughput approach used in much of the work just discussed implicitly assumes that all jobs are identical+ This implies that the control problem consists entirely in keeping the server~s! busy as much as possible+ Hence, optimal policies are those which keep the queues load balanced+ This situation changes when we allow jobs to have different service classes+ For example, for the case of a full-connectivity multiserver system with different hold- ing costs, it is easy to construct an example where the longest connected queue protocol is not optimal+ We can similarly construct an example demonstrating that the index policy is not optimal for a single-server varying connectivity system with different holding costs+ To the best of our knowledge, no results are presently available for the multiserver scheduling of parallel queues with connectivity constraints and multiple service classes+ In this article, we investigate an instance of multiserver scheduling of parallel queues with connectivity constraints and multiple service classes+ The main contribution of the article is the determination of conditions on job~message! weighting and job service times sufficient to guarantee the optimality of an index policy for Problem~P!+ We show by example that the above mentioned conditions are not necessary to ensure the optimality of the index policy+ The article is organized as follows+ In Section 2, Problem~P! is analyzed; a condition sufficient to guarantee the optimality of an index policy is presented+ In Section 3, various refinements to Problem~P!, including more specific assumptions on the arrival and connectivity processes, are considered+ These refinements allow for improved conditions which are still sufficient to guarantee optimality of the same index policy+ Section 4 contains a brief discussion of the infinite horizon problem and Section 5 summarizes the article+ Finally, Appendices A-F provide detailed proofs of some technical statements in the main body of text, not included there to improve the flow of the basic arguments+ To proceed with the analysis, we need the following definitions and notation+ N The number of queues+ M The number of servers+ T The length of the finite time horizon+ The cost RV associated with policy p, as defined in~2!+ By an index policy, we mean any policy which attaches a fixed numeric index to each queue and then plays the M connected and nonempty queues of highest index at each time+ The index of a queue refers to this numeric index, and not the queue's subscript+ As stated in Problem~P!, we wish to find an allocation policy p which minimizes J T p , defined in~1! as To achieve this, we need~1! a result that describes the effect of the initial condition on the performance of any index policy p and~2! an expression that specifies the difference in performance between any two policies, say J p and [ p, under the same initial condition+ We begin with the result that describes the effect of the initial condition on the performance of any index policy p+ Proof: First, note that if x 0 i ϭ 0, we have x 0 ϭ x 0 iϪ , and then the lemma is clearly true+ We henceforth assume x 0 i . 0, so that x 0 x 0 iϪ , and proceed by induction on T+ First, we define the following events: A1! At t ϭ1, the same queues are being served by our index policy p under x 0 and x 0 iϪ + A2! At t ϭ 1, M Ϫ 1 of the servers are allocated to the same queues by policy p under x 0 and x 0 iϪ ; the Mth server is allocated to Q i under x 0 and to Q j under x 0 iϪ , where Q j is of lower index than Q i + Because p is an index policy, a service difference occurs at t ϭ 1 only when x 0 i ϭ 1, p allocates to Q i under x 0 , and, under x 0 iϪ , p allocates to Q j , a queue of lower priority than Q i , instead of the empty Q i + Hence, events~A1! and~A2! partition the space of possible events+ We now proceed with the induction+ Let T ϭ 1+ Under~A1!, the cost difference is 1 For RVs x and y, event A, and s-field F, we define E @y6A, x, F # to mean E @y6A പ s~x! ∨ F #+ Under~A2!, the cost difference is because service success is assumed i+i+d+ and independent among queues+ From~5! and~6!, the induction basis step is established+ Now, assume~4! holds for an arbitrary T and consider the case of a T ϩ 1 horizon, labeling time so that the theorem is true over 2, + + + ,T ϩ1+ Then, under~A1!, we have because service success in both cases is the same+ By the induction hypothesis, Under~A2!, again since service is i+i+d+ and independent among queues, we find that where x 1 is the state resulting from x 0 when no service completion is achieved at t ϭ 1 at Q i and Q j + By the induction hypothesis, and by applying the induction hypothesis twice in succession, we get Combining~10!-~13!, we obtain Because A1 and A2 are a partition of all possibilities arising from the allocation of the M servers under the two different initial conditions~under the same index policy!,~9! and~14! imply that and the induction step is proved+ Hence the proof of Lemma 1 is complete+ Ⅲ Next, we derive an expression that specifies the difference in performance between any two policies J p and [ p under the same initial condition+ Earlier, we have defined C [ p to be the cost RV under [ p and C J p to be the cost RV under J p+ We now define R to be the difference To show J p optimal, it suffices to prove that for any other allocation policy [ p+ To prove~16!, consider two given policies [ p and J p and assume that at t ϭ1, they both run on the same system and that both Q i and Q j are connected and nonempty at this time+ Assume J p chooses Q i and not Q j , and [ p chooses Q j and not Q i at t ϭ1, and further assume that all other server allocations are the same for the two policies at t ϭ 1+ Recall that h t i is the service success indicator variable at time t for Q i + We then have where the last equality follows because service success is i+i+d+ and independent among queues and independent of all arrivals and connectivities+ We use~17! to prove the main result of this section, which is given by the following theorem+ Discussion: The essence of the result of Theorem 1 is the following: If we were guaranteed that the system described in Problem~P! operated away from the boundary all the time~i+e+, if the queues were continuously nonempty!, then it would be optimal to always allocate the M servers to the queues with the M highest indices+ Near the boundary, server utilization~because of empty queues! becomes a critical issue in determining an optimal server allocation strategy+ The index policy, which allocates servers without taking into account the number of customers in the queues, may result in server underutilization; thus, it may not be optimal near the boundary+ Consequently, if we require optimality of the index policy for Problem~P!, we must identify conditions to ensure that the advantage gained by always allocating the servers to the highest-index queues overcompensates potential losses resulting from server underutilization near the boundary+ Such a condition is expressed by~18!, which requires that the indices associated with the queues be sufficiently separated from each other+ Such a separation results in a priority ordering of the queues sufficient to guarantee the optimality of the index rule+ It is interesting to compare this result with the case in which all queues have the same weighting+ Then~see @10,12#!, the optimal policy is to serve the connected queues of longest length~LCQ!+ The intuition here is that, at a given time, serving any queue gives the same expected return as serving any other, so the optimal server allocation is the one which optimizes the expected number of services over the horizon+ The LCQ policy accomplishes this by minimizing the number of empty queues, trying to avoid the situation where connected empty queues and disconnected nonempty queues reduce service possibility+ On the other hand, Theorem 1 demonstrates the complementary case in which queues have sufficient spacing in weighting, so by serving the costlier queues first, any server underutilization~inef-ficiency! is sufficiently compensated+ Between these two extremes, there is a region of queue cost such that these two competing goals conflict, and the optimal policy becomes quite difficult to specify+ Proof~Theorem 1!: Assume that the queues have been labeled such that~18! is satisfied+ Consider M additional queues in the system, numbered N ϩ1, N ϩ 2, + + + , Nϩ M, each with c Nϩi ϭ m Nϩi ϭ 0+ Condition~18! is still satisfied with these extra because of~18! and the fact that~1 Ϫ b!0@1 Ϫ~1 Ϫ m i !b# Յ 1+ Repetition of the argument leading to~19! shows that the assertion of the theorem is true for T ϭ 1+ To proceed with the induction, we assume that the assertion of the theorem is true when the horizon is T and prove that the index policy described in the statement of the theorem is optimal when the horizon is T ϩ1+ Consider policies [ p and J p with the following features: p and J p allocate M Ϫ 1 of the servers to the same queues; the M th server is allocated to Q i by policy J p and to Q j by policy [ p, where i Ͻ j+ Without any loss of generality, we assume that i Ͻ N+ F2! From time t ϭ 2 on, policies J p and [ p follow the optimal allocation policy for the T-horizon problem+ We define Q ϩ :ϭ $the set of queues with indices larger than that of Q j %, Q Ϫ :ϭ $The set of queues with indices equal to or less than that of Q j %, Using the characteristics of policies [ p and J p, the above definitions, and~28! and 29!, we proceed to complete the induction step by determining a lower bound on E @R6F 0 # at t ϭ 1+ To accomplish this, we examine separately each of the four terms appearing in the right-hand side of~17!+ We prove that Combining~17! and~30!-~33!, we obtain Because of~28! and~29!, The last inequality in~35! is true because service success at Q i under [ p is independent of the event Dh 1 i ϭ 1 as well as the initial information state F 0 , because service Combining~34!-~36!, we obtain and because of~18!, Consequently, policy J p is superior to [ p for the~T ϩ1!-horizon problem+ Repetition of the argument leading to~38! shows that, under~18!, the index policy described in the statement of the theorem is optimal for the~T ϩ 1!-horizon problem+ To complete the proof of the induction step, we must prove~30!-~33!+ The proofs of these equations are located in Appendices A-D+ With~30!-~33!, the induction step is complete and the theorem is proved+ Ⅲ Remark: The interchange argument used to prove the optimality of the cm rule in @2# cannot be applied to the case of either multiserver or that of varying connectivity, because it is not possible to guarantee that such an interchange time occurs+ We present two examples to illustrate the role of Condition~18!+ The first example shows that if Condition~18! is not satisfied, the index policy is not, in general, optimal+ The second example shows that Condition~18! is not necessary to guarantee the optimality of the index policy+ Example 1: Let T ϭ 2, N ϭ 2, M ϭ1, m 1 ϭ m 2 ϭ1, x 0 1 ϭ x 0 2 ϭ 1, c 1 Ͼ c 2 ϭ 0+9c 1 , and b ϭ 0+5+ Assume there are no arrivals, and connectivity is i+i+d+ with q 1 ϭ 1 and q 2 ϭ 0+5+ Then, and From~39! and~40!, we conclude that Q 1 has the higher index, but Condition~18! of Theorem 1 is not satisfied+ Denote by J p the index policy~i+e+, the policy that gives priority to Q 1 when both queues are connected and nonempty!, and by [ p, the policy that gives priority to Q 2 when both queues are connected and nonempty+ Then, J p is nonoptimal, because in the case where both queues are initially connected, Example 2: Consider the same situation as in Example 1 with the one difference: c 2 ϭ 0+7c 1 + Then,~39! is valid and Condition~18! of Theorem 1 is not satisfied because Nevertheless, the index policy J p is optimal because The result of Theorem 1 can be graphically described for a system consisting of two queues~N ϭ 2! and one server~M ϭ1!+ The graphical description is based on the following summary of the result of Theorem 1 for this case+ When both queues are connected at a given time, If we consider c 2 m 2 fixed and imagine varying c 1 m 1 , then, because of~42! and~43!, the space of possible values of c 1 m 1 is divided into three regions~shown in Fig+ 1!: 1! Serve Q 1 when both queues are connected;~2! serve Q 2 when both queues are connected;~3! optimal policy is unspecified+ The third region exists because~42! and~43! do not cover all possibilities+ The intuition developed from this simple case extends naturally to the general case+ Note that as b r 0, the region in Figure 1 where the optimal policy is not specified becomes empty+ This is the situation where future cost has no effect on current decisions, and so the best policy minimizes cost at the current time step only+ For this case, the optimal policy is a greedy algorithm~i+e+, the cm rule!+ 270 C. Lott and D. Teneketzis Condition~18!, which is sufficient to ensure the optimality of the index policy for Problem~P!, was derived under no assumptions on the arrival and connectivity processes+ The result of Theorem 1 can be strengthened under explicit assumptions on the aforementioned processes+ In this section we examine several instances of Problem~P! that arise under various assumptions on the arrival and connectivity processes+ We show that Condition~18! can be improved when more information is given about the arrival and0or connectivity processes+ As a reminder, the assumption of i+i+d+ service holds throughout this section of the article+ We assume i+i+d+ queue connectivity, independent of the service success process, but leave arrivals arbitrary+ We do not require independence of the connectivity between queues, only i+i+d+ for any given queue, and independence of service success for all queues+ We prove the following variant of Theorem 1+ Proof: We assume~44! is satisfied and proceed by induction+ We show first that for T ϭ 1, the result of Theorem 2 is true under~44! and then that the induction step holds+ First, note that So if~44! is satisfied, then necessarily m i c i Ն m j c j + Then, for T ϭ 1, the proof of the result of the theorem is the same as in Theorem 1+ We proceed with the induction step+ First, note that the arguments leading tõ 34! do not depend on the specific form of~18!; hence, the same arguments are valid here as well+ So,~34! is true+ Define Just as for t 1 , we define t 3 ϭ T ϩ 1 whenever t 3 Ͼ T+ As earlier, we know that t 1 Ն t s a+s+, where t s is defined by~26!+ We also have that t s Ն t 3 a+s+, which follows directly from their definitions+ Furthermore, by its definition, the independence of service success, and the i+i+d+ nature of connectivity at Q i , t 3 is independent of other system processes+ We thus obtain Next, we compute the right-hand side of~46!: Because of~46! and~47!, 272 Combining~34! and~48!, Inequality~44! then implies DJ Ն 0, and the induction step is completed by arguments identical to those following~38! in the proof of Theorem 1+ Under the additional assumption of i+i+d+ connectivity, Theorem 2 provides an improved sufficiency condition over Theorem 1+ For fixed q i m i , just as earlier, b r 0 means a reversion of the cm rule+ The difference between the problem considered in this section and Problem~P! is that in the "sufficiency factor"~1Ϫb!0@1Ϫ~1Ϫm i !b#, describing the separation of indices in~18!, m i is replaced by q i m i + Therefore, the rate at which Q i can be served is reduced by both connectivity and service probability, and this leads to a condition that is weaker~i+e+, better! than~18! and is sufficient to guarantee the optimality of the index policy+ In contrast to Section 3+1, here we assume Bernoulli arrivals and arbitrary connectivity+ The arrivals do not need to be independent among queues+ We prove the following variant of Theorem 1+ Proof: We begin with~34! because the arguments leading to it are as in the proof of Theorem 1+ Assume~49! is satisfied and note that Then, necessarily ∀i, j, m i c i Ն m j c j + Then, for T ϭ 1, the assertion of the theorem can be established in the same way as in Theorem 1+ To prove the induction step, we begin with~34! and define the stopping times: Just as earlier, we define t 3 ϭ T ϩ 1 whenever t 3 Ͼ T, and similarly for t 4 + It is immediate that t 1 Ն t 3 a+s+ @t 1 is defined by~22!#, as a minimum requirement for t 1 ϭ t is that Q i be empty for J p and service is possible at t+ If an arrival just occurred, the queue cannot be empty+ Also from the definitions, it is immediate that t 3 Ն t 4 a+s+ Moreover, by its definition, the independence of service success, and the i+i+d+ nature of arrivals at Q i , t 4 is independent of other system processes+ Hence, we obtain Letting S a i ϭ1 Ϫ a i , the probability of no arrival, we obtain for the right-hand side of 52!, Because of~52! and~53!, 274 Combining~34! and~54!, we find that Condition~49! then implies DJ Ն 0, and the rest of the induction step follows as in Theorem 1+ Ⅲ In this section, we assume both Bernoulli queues connectivity and Bernoulli arrivals+ Then, we get the following condition sufficient to guarantee the optimality of an index policy+ Proof: We assume~55! is satisfied and note that Then, c i m i Ն c j m j , ∀i, j, i Ͻ j, and for T ϭ 1, the proof of Theorem 4 is the same as that of Theorem 1+ To establish the induction step, we note that the arguments leading to~34! are the same as in Theorem 1 and define the stopping time t 5 ϭ min$t Ն 2: there is a service success and no arrival at Q i under [ p6Q i is always served when connected under [ p% and let t 5 ϭ T ϩ 1 whenever t 5 Ͼ T+ From the definitions, it follows that t 1 Ն t 5 a+s+ Note that for i+i+d+ arrivals and connectivity, the system is a Markov chain, and so F 0 information is summarized in x 0 ; from its definition, t 5 is independent of the initial state and service success at t ϭ 1+ Consequently, Letting S a i ϭ1 Ϫ a i , the probability of no arrival, we obtain for the right-hand side of 56!, Because of~56! and~57! and the Markovian property of the system, Combining~34! and~58!, we find Condition~55! then implies DJ Ն 0, and the rest of the induction step is the same as in Theorem 1+ We can improve the result of Theorem 4 by performing a more careful analysis of a first hitting time bound for t 1 + The resulting sufficient condition is more relaxed than~55!, but the expression describing the condition is more complicated and less intuitive+ and for any finite dimension L, define the L ϫ L matrix A i as then, at any time t, it is optimal to serve the M connected queues of highest index, or serve all queues if less than M are connected+ Further, the "sufficiency factor" is monotonic in the size of A i ; that is, Proof: The proof of Theorem 5 is given in Appendix E+ In this section, we drop the i+i+d+ assumption on arrivals and connectivity+ Under certain statistical assumptions on the arrival and connectivity processes, which are stated precisely in the following theorem, we derive a condition sufficient to guarantee the optimality of the index policy described in Section 2+ We show that this condition improves~18!+ Our results are summarized in the following theorem+ and if Q i is served at t+ Then, if there is a labeling of the queues such that and Condition~D1!, respectively, in @7#+ Condition~62! as well as Conditioñ D1! in @7# provide conditions on the drift of a process $x t , t Ն 0%; they differ in their requirements on the direction of the drift bound+ 2+ In general, it is difficult to explicitly specify the h i and r i which satisfy~64! and give the optimal condition in~65! and~66!, although they can be determined numerically+ 3+ As an example, from~65! we can achieve the condition of~18! when h i ϭ Ϫln b and r i ϭ10b Ϫ~10m i !@1 Ϫ~1 Ϫ m i !b#+ This gives a r i Ն 1 only when which is easier to check and implies~64!+ However, there may be cases where~64! is satisfied and~67! is not+ We summarize the results in Table 1+ From the table, and from previous derivations, we observe that always and under certain conditions~E1! Յ~E6!+ The "sufficiency factors" that appear in the table indicate how much the indices of different queues must be separated from one another to guarantee that the index policy is optimal+ The above inequalities are intuitively pleasing because they show that as the statistical description of the sys- Arbitrary arrivals, arbitrary connectivity tem becomes more detailed, the sufficient conditions for the optimality of the index policy improve; that is, the additional statistical information about the system is used to reduce the separation among the queues' indices while the optimality of the index policy is maintained+ Comment: Note that the number of servers M never enters explicitly in the arguments of the proofs in Sections 2 and 3+ Therefore, we believe that even when the number of servers is a random function of time, the conditions sufficient to guarantee the optimality of the cm rule for the channel allocation problem described in this article are the same as those shown in Table 1+ 4. INFINITE HORIZON The index policy described in Section 2 is optimal for all finite-horizon problems under the conditions of Theorems 1-6+ Because the conditions of these theorems do not depend on the horizon T, one can prove by a simple contradiction argument that the same index policy is optimal for the corresponding infinite horizon problems under the same conditions+ We have shown that there are conditions on system parameters which guarantee the optimality of an index policy for the N-queue M-server system with arrivals and varying connectivity+ These conditions depend on the statistical assumptions made for the problem, as well as the value of key system parameters, such as arrival rate, connection probability, and service probability+ The interest in this problem arises from its applicability to several important systems, including mobile communication networks under centralized control and image formation systems+
In REF , the problem of multichannel allocation in single-hop mobile networks with multiple service classes was formulated as an RMAB, and sufficient conditions for the optimality of a myopictype index policy were established.
1055866
2000), ‘On the Optimality of An Index Rule in Multichannel Allocation for Single-Hop Mobile Networks with Multiple Service Classes
{ "venue": "Probability in the Engineering and Informational Sciences", "journal": null, "mag_field_of_study": [ "Mathematics" ] }
Abstract-Proxy voting is a form of voting, where the voters can either vote on an issue directly, or delegate their voting right to a proxy. This proxy might for instance be a trusted expert on the particular issue. In this work, we extend the widely studied end-to-end verifiable Helios Internet voting system towards the proxy voting approach. Therefore, we introduce a new type of credentials, so-called delegation credentials. The main purpose of these credentials is to ensure that the proxy has been authorised by an eligible voter to cast a delegated vote. If voters, after delegating, change their mind and want to vote directly, cancelling a delegation is possible throughout the entire voting phase. We show that the proposed extension preserves the security requirements of the original Helios system for the votes that are cast directly, as well as security requirements tailored toward proxy voting.
A further proposal in REF extends the Helios voting system with delegated voting functionality.
16244359
Introducing Proxy Voting to Helios
{ "venue": "2016 11th International Conference on Availability, Reliability and Security (ARES)", "journal": "2016 11th International Conference on Availability, Reliability and Security (ARES)", "mag_field_of_study": [ "Computer Science" ] }
Abstract-Mobile social networks (MSNs) are a kind of delay tolerant network that consists of lots of mobile nodes with social characteristics. Recently, many social-aware algorithms have been proposed to address routing problems in MSNs. However, these algorithms tend to forward messages to the nodes with locally optimal social characteristics, and thus cannot achieve the optimal performance. In this paper, we propose a distributed optimal Community-Aware Opportunistic Routing (CAOR) algorithm. Our main contributions are that we propose a home-aware community model, whereby we turn an MSN into a network that only includes community homes. We prove that, in the network of community homes, we can still compute the minimum expected delivery delays of nodes through a reverse Dijkstra algorithm and achieve the optimal opportunistic routing performance. Since the number of communities is far less than the number of nodes in magnitude, the computational cost and maintenance cost of contact information are greatly reduced. We demonstrate how our algorithm significantly outperforms the previous ones through extensive simulations, based on a real MSN trace and a synthetic MSN trace.
In the work of Xiao et al, REF a distributed optimal community-aware opportunistic routing algorithm was proposed and a home-aware community model was built, whereby the MSN was turned into a network, which only included communities.
382075
Community-Aware Opportunistic Routing in Mobile Social Networks
{ "venue": "IEEE Transactions on Computers", "journal": "IEEE Transactions on Computers", "mag_field_of_study": [ "Computer Science" ] }
Distributed systems and applications are often expected to enforce high-level authorization policies. To this end, the code for these systems relies on lower-level security mechanisms such as digital signatures, local ACLs, and encrypted communications. In principle, authorization specifications can be separated from code and carefully audited. Logic programs in particular can express policies in a simple, abstract manner. We consider the problem of checking whether a distributed implementation based on communication channels and cryptography complies with a logical authorization policy. We formalize authorization policies and their connection to code by embedding logical predicates and claims within a process calculus. We formulate policy compliance operationally by composing a process model of the distributed system with an arbitrary opponent process. Moreover, we propose a dependent type system for verifying policy compliance of implementation code. Using Datalog as an authorization logic, we show how to type several examples using policies and present a general schema for compiling policies.
Fournet et al. REF consider the problem of verifying whether a distributed system correctly implements a target authorization policy, expressed as statements and expectations.
1232197
A type discipline for authorization policies
{ "venue": "TOPL", "journal": null, "mag_field_of_study": [ "Computer Science" ] }