Dataset schema:
src: string (100 to 132k characters)
tgt: string (10 to 710 characters)
paper_id: string (3 to 9 characters)
title: string (9 to 254 characters)
discipline: dict
An answer to a query has a well-defined lineage expression (alternatively called how-provenance) that explains how the answer was derived. Recent work has also shown how to compute the lineage of a non-answer to a query. However, the cause of an answer or non-answer is a more subtle notion and consists, in general, of only a fragment of the lineage. In this paper, we adapt Halpern, Pearl, and Chockler's recent definitions of causality and responsibility to define the causes of answers and non-answers to queries, and their degree of responsibility. Responsibility captures the notion of degree of causality and serves to rank potentially many causes by their relative contributions to the effect. Then, we study the complexity of computing causes and responsibilities for conjunctive queries. It is known that computing causes is NP-complete in general. Our first main result shows that all causes to conjunctive queries can be computed by a relational query which may involve negation. Thus, causality can be computed in PTIME, and very efficiently so. Next, we study computing responsibility. Here, we prove that the complexity depends on the conjunctive query and demonstrate a dichotomy between PTIME and NP-complete cases. For the PTIME cases, we give a non-trivial algorithm, consisting of a reduction to the max-flow computation problem. Finally, we prove that, even when it is in PTIME, responsibility is complete for LOGSPACE, implying that, unlike causality, it cannot be computed by a relational query.
Meliou et al. REF study the concepts of causality and responsibility of instance-based explanations for data present or missing in a conjunctive query result.
11637388
The Complexity of Causality and Responsibility for Query Answers and non-Answers
{ "venue": "PVLDB", "journal": "PVLDB", "mag_field_of_study": [ "Computer Science" ] }
Abstract. Over the past years, the paradigm of component-based software engineering has been established in the construction of complex mission-critical systems. Due to this trend, there is a practical need for techniques that evaluate critical properties (such as safety, reliability, availability or performance) of these systems. In this paper, we review several high-level techniques for the evaluation of safety properties for component-based systems and we propose a new evaluation model (State Event Fault Trees) that extends safety analysis towards a lower abstraction level. This model possesses a state-event semantics and strong encapsulation, which is especially useful for the evaluation of component-based software systems. Finally, we compare the techniques and give suggestions for their combined usage.
Grunske et al. REF present a methodology for model-based hazard analysis for component-based software systems based on State Event Fault Trees.
9370124
Model-driven safety evaluation with state-event-based component failure annotations
{ "venue": "CBSE, Lecture Notes in Computer Science 3489 (2005", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Abstract-The energy consumption due to input-output pins is a substantial part of the overall chip consumption. To reduce this energy, this work presents the working-zone encoding (WZE) method for encoding an external address bus, based on the conjecture that programs favor a few working zones of their address space at each instant. In such cases, the method identifies these zones and sends through the bus only the offset of this reference with respect to the previous reference to that zone, along with an identifier of the current working zone. This is combined with a one-hot encoding for the offset. Several improvements to this basic strategy are also described. The approach has been applied to several address streams, broken down into instruction-only, data-only, and instruction-data traces, to evaluate the effect on separate and shared address buses. Moreover, the effect of instruction and data caches is evaluated. For the case without caches, the proposed scheme is especially beneficial for data-address and shared buses, which are the cases where other codings are less effective. On the other hand, for the case with caches the best scheme for the instruction-only and data-only traces is the WZE, whereas for the instruction-data traces it is either the WZE or the bus-invert with four groups (depending on the energy overhead of these techniques). Index Terms-Address bus, encoding for low power, low-power, microprocessor, input-output energy.
Musoll et al. proposed a Working Zone Encoding (WZE) technique REF , based on the principle of locality of the addresses on the bus.
14473884
Working-zone encoding for reducing the energy in microprocessor address buses
{ "venue": "IEEE Trans. VLSI Syst.", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
The popularity of social networking greatly increases interaction among people. However, one major challenge remains: how to connect people who share similar interests. In a social network, the majority of people who share similar interests with a given user are in the long tail that accounts for 80% of the total population. Searching for similar users by following links in a social network has two limitations: it is inefficient and incomplete. Thus, it is desirable to design new methods to find like-minded people. In this paper, we propose to use collective wisdom from the crowd, or tag networks, to solve the problem. In a tag network, each node represents a tag as described by some words, and the weight of an undirected edge represents the co-occurrence of two tags. As such, the tag network describes the semantic relationships among tags. In order to connect to other users of similar interests via a tag network, we use diffusion kernels on the tag network to measure the similarity between pairs of tags. The similarity of people's interests is measured on the basis of the similar tags they share. To recommend people who are alike, we retrieve the top-k people sharing the most similar tags. Compared to two baseline methods, triadic closure and LSI, the proposed tag network approach achieves 108% and 27% relative improvements on the BlogCatalog dataset, respectively.
Others connect like-minded users using tag network inference REF .
9598274
Connecting users with similar interests via tag network inference
{ "venue": "CIKM '11", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Many scientific applications can be structured as Parallel Task Graphs (PTGs), that is, graphs of data-parallel tasks. Adding data-parallelism to a task-parallel application provides opportunities for higher performance and scalability, but poses additional scheduling challenges. In this paper, we study the off-line scheduling of multiple PTGs on a single, homogeneous cluster. The objective is to optimize performance without compromising fairness among the PTGs. We consider the range of previously proposed scheduling algorithms applicable to this problem, both from the applied and the theoretical literature, and we propose minor improvements when possible. Our main contribution is an extensive evaluation of these algorithms in simulation, using both synthetic and real-world application configurations, using two different metrics for performance and one metric for fairness. We identify a handful of algorithms that provide good trade-offs when considering all these metrics. The best algorithm overall is one that structures the schedule as a sequence of phases of increasing duration based on a makespan guarantee produced by an approximation algorithm.
A study of algorithms to schedule multiple PTGs on a single homogeneous cluster is carried out by Casanova et al. REF .
173479
On cluster resource allocation for multiple parallel task graphs
{ "venue": "Journal of Parallel and Distributed Computing", "journal": "Journal of Parallel and Distributed Computing", "mag_field_of_study": [ "Computer Science" ] }
Abstract When dealing with complex systems, information is very often fragmented across many different models expressed within a variety of (modeling) languages. To provide the relevant information in an appropriate way to different kinds of stakeholders, (parts of) such models have to be combined and potentially revamped by focusing on concerns of particular interest for them. Thus, mechanisms to define and compute views over models are highly needed. Several approaches have already been proposed to provide (semi)automated support for dealing with such model views. This paper provides a detailed overview of the current state of the art in this area. To achieve this, we relied on our own experiences of designing and applying such solutions in order to conduct a literature review on this topic. As a result, we discuss the main capabilities of existing approaches and propose a corresponding research agenda. We notably contribute a feature model describing what we believe to be the most important characteristics of the support for views on models. We expect this work to be helpful to both current and potential future users and developers of model view techniques, as well as to any person generally interested in model-based software and systems engineering.
Only recently has a review of the state of the art of model views appeared REF .
23658100
A feature-based survey of model view approaches
{ "venue": "Software & Systems Modeling", "journal": "Software & Systems Modeling", "mag_field_of_study": [ "Computer Science" ] }
Over the last decade many techniques and tools for software clone detection have been proposed. In this paper, we provide a qualitative comparison and evaluation of the current state-of-the-art in clone detection techniques and tools, and organize the large amount of information into a coherent conceptual framework. We begin with background concepts, a generic clone detection process and an overall taxonomy of current techniques and tools. We then classify, compare and evaluate the techniques and tools in two different dimensions. First, we classify and compare approaches based on a number of facets, each of which has a set of (possibly overlapping) attributes. Second, we qualitatively evaluate the classified techniques and tools with respect to a taxonomy of editing scenarios designed to model the creation of Type-1, Type-2, Type-3 and Type-4 clones. Finally, we provide examples of how one might use the results of this study to choose the most appropriate clone detection tool or technique in the context of a particular set of goals and constraints. The primary contributions of this paper are: (1) a schema for classifying clone detection techniques and tools and a classification of current clone detectors based on this schema, and (2) a taxonomy of editing scenarios that produce different clone types and a qualitative evaluation of current clone detectors based on this taxonomy.
For a complete set see our recent survey of the state of the art REF .
17181849
Comparison and Evaluation of Code Clone Detection Techniques and Tools: A Qualitative Approach
{ "venue": "SCIENCE OF COMPUTER PROGRAMMING", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Abstract The paper presents a system for automatic, geo-registered, real-time 3D reconstruction from video of urban scenes. The system collects video streams, as well as GPS and inertial measurements, in order to place the reconstructed models in geo-registered coordinates. It is designed using current state-of-the-art real-time modules for all processing steps. It employs commodity graphics hardware and standard CPUs to achieve real-time performance. We present the main considerations in designing the system and the steps of the processing pipeline. Our system extends existing algorithms to meet the robustness and variability necessary to operate out of the lab. To account for the large dynamic range of outdoor videos, the processing pipeline estimates global camera gain changes in the feature tracking stage and efficiently compensates for these in stereo estimation without impacting the real-time performance. The required accuracy for many applications is achieved with a two-step stereo reconstruction process exploiting the redundancy across frames. We show results on real video sequences comprising hundreds of thousands of frames.
Similarly, Pollefeys et al. REF build urban scene models out of geo-registered video frames.
2514257
Detailed Real-Time Urban 3D Reconstruction from Video
{ "venue": "International Journal of Computer Vision", "journal": "International Journal of Computer Vision", "mag_field_of_study": [ "Computer Science" ] }
Abstract-This paper studies the service time required to transmit a packet in an opportunistic spectrum access scenario, where an unlicensed secondary user (SU) transmits a packet using the radio spectrum licensed to a primary user (PU). Considering a cognitive radio network, it is assumed that during the transmission period of an SU multiple interruptions from PUs may occur, increasing the time needed to transmit a packet. Assuming that the SU's packet length follows a geometric distribution, we start by deriving the probability of an SU transmitting its packet when k > 0 periods of PU's inactivity are observed. As the main contribution of this paper, we derive the characteristic function of the service time, which is further used to approximate its distribution in a real-time estimation process. The proposed methodology is independent of the SUs' traffic condition, i.e., both saturated or non-saturated SU's traffic regime is assumed. Our analysis provides a lower bound for the service time of the SUs, which is useful to determine the maximum throughput achievable by the secondary network. Simulation results are used to validate the analysis, which confirm the accuracy of the proposed methodology.
More recently, REF proposed a theoretical characterization of the distribution of the service time when both saturated and nonsaturated traffic conditions occur, and variable-length packets are transmitted by the SUs.
13655066
Characterization of the Opportunistic Service Time in Cognitive Radio Networks
{ "venue": "IEEE Transactions on Cognitive Communications and Networking", "journal": "IEEE Transactions on Cognitive Communications and Networking", "mag_field_of_study": [ "Computer Science" ] }
Abstract-Intrusion detection has attracted considerable interest from researchers and industry. After many years of research, the community still faces the problem of building reliable and efficient IDS that are capable of handling large quantities of data with changing patterns in real-time situations. The work presented in this manuscript classifies intrusion detection systems (IDS). Moreover, a taxonomy and survey of shallow and deep networks intrusion detection systems is presented based on previous and current works. This taxonomy and survey reviews machine learning techniques and their performance in detecting anomalies. Feature selection, which influences the effectiveness of machine learning (ML) IDS, is discussed to explain the role of feature selection in the classification and training phase of ML IDS. Finally, a discussion of the false and true positive alarm rates is presented to help researchers model reliable and efficient machine learning based intrusion detection systems.
Hodo et al. REF reviewed machine learning techniques and their performance in detecting anomalies.
11381530
Shallow and Deep Networks Intrusion Detection System: A Taxonomy and Survey
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science" ] }
In this paper we illustrate how to perform both visual object tracking and semi-supervised video object segmentation, in real-time, with a single simple approach. Our method, dubbed SiamMask, improves the offline training procedure of popular fully-convolutional Siamese approaches for object tracking by augmenting their loss with a binary segmentation task. Once trained, SiamMask solely relies on a single bounding box initialisation and operates online, producing class-agnostic object segmentation masks and rotated bounding boxes at 55 frames per second. Despite its simplicity, versatility and fast speed, our strategy allows us to establish a new state-of-the-art among real-time trackers on VOT-2018, while at the same time demonstrating competitive performance and the best speed for the semi-supervised video object segmentation task on DAVIS-2016 and DAVIS-2017. The project website is http://www.robots.ox.ac.uk/~qwang/SiamMask.
Moreover, SiamMask REF combines the fully-convolutional Siamese tracker with a binary segmentation head for accurate tracking.
54475412
Fast Online Object Tracking and Segmentation: A Unifying Approach
{ "venue": "2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)", "journal": "2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)", "mag_field_of_study": [ "Computer Science" ] }
Abstract-As the service requirements of network applications shift from high throughput to high media quality, interactivity, and responsiveness, the definition of QoE (Quality of Experience) has become multidimensional. Although it may not be difficult to measure individual dimensions of the QoE, how to capture users' overall perceptions when they are using network applications remains an open question. In this paper, we propose a framework called OneClick to capture users' perceptions when they are using network applications. The framework only requires a subject to click a dedicated key whenever he/she feels dissatisfied with the quality of the application in use. OneClick is particularly effective because it is intuitive, lightweight, efficient, time-aware, and application-independent. We use two objective quality assessment methods, PESQ and VQM, to validate OneClick's ability to evaluate the quality of audio and video clips. To demonstrate the proposed framework's efficiency and effectiveness in assessing user experiences, we implement it on two applications, one for instant messaging applications, and the other for first-person shooter games. A Flash implementation of the proposed framework is also presented.
In REF , a framework is proposed to capture users' perceptions while they are using network applications.
10545189
OneClick: A Framework for Measuring Network Quality of Experience
{ "venue": "IEEE INFOCOM 2009", "journal": "IEEE INFOCOM 2009", "mag_field_of_study": [ "Computer Science" ] }
An efficient input method engine (IME) is very important for Chinese language processing, and pinyin-to-Chinese (PTC) conversion is its core part. Meanwhile, although typos are inevitable during pinyin input, existing IMEs have paid little attention to this major inconvenience. In this paper, motivated by a key equivalence of two decoding algorithms, we propose a joint graph model to globally optimize PTC conversion and typo correction for IME. The evaluation results show that the proposed method outperforms both existing academic and commercial IMEs.
Moreover, REF introduced a model for IME typo correction.
16217436
A Joint Graph Model for Pinyin-to-Chinese Conversion with Typo Correction
{ "venue": "ACL", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Abstract. How can an RFID (Radio Frequency Identification Devices) system prove that two or more RFID tags are in the same location? Previous researchers have proposed yoking-proof and grouping-proof techniques to address this problem - and when these turned out to be vulnerable to replay attacks, a new existence-proof technique was proposed. We critique this class of existence-proofs and show it has three problems: (a) a race condition when multiple readers are present; (b) a race condition when multiple tags are present; and (c) a problem determining the number of tags. We present two new proof techniques, a secure timestamp proof (secTS-proof) and a timestamp-chaining proof (chaining-proof) that avoid replay attacks and solve problems in previously proposed techniques.
In REF , the authors address the existence of race conditions when multiple readers or tags are present or the number of participating tags is unknown.
12071528
Coexistence Proof Using Chain of Timestamps for Multiple RFID Tags
{ "venue": "APWeb/WAIM Workshops", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Abstract-Large scale hierarchical caches for web content have been deployed widely in an attempt to reduce delivery delays and bandwidth consumption and also to improve the scalability of content dissemination through the world wide web. Irrespectively of the specific replacement algorithm employed in each cache, a de facto characteristic of contemporary hierarchical caches is that a hit for a document at an l-level cache leads to the caching of the document in all intermediate caches (levels l-1, ..., 1) on the path towards the leaf cache that received the initial request. This paper presents various algorithms that revise this standard behavior and attempt to be more selective in choosing the caches that get to store a local copy of the requested document. As these algorithms operate independently of the actual replacement algorithm running in each individual cache, they are referred to as meta algorithms. Three new meta algorithms are proposed and compared against the de facto one and a recently proposed one by means of synthetic and trace-driven simulations. The best of the new meta algorithms appears to be leading to improved performance under most simulated scenarios, especially under a low availability of storage. The latter observation makes the presented meta algorithms particularly favorable for the handling of large data objects such as stored music files or short video clips. Additionally, a simple load balancing algorithm that is based on the concept of meta algorithms is proposed and evaluated. The algorithm is shown to be able to provide for an effective balancing of load, thus possibly addressing the recently discovered "filtering effect" in hierarchical web caches.
LCD (Leave Copy Down) is proposed in REF for hierarchical web caching.
344012
Meta algorithms for hierarchical Web caches
{ "venue": "IEEE International Conference on Performance, Computing, and Communications, 2004", "journal": "IEEE International Conference on Performance, Computing, and Communications, 2004", "mag_field_of_study": [ "Computer Science" ] }
Water is the source of all things, so it can be said that without the sustainable development of water resources, there can be no sustainable development of human beings. In recent years, sudden water pollution accidents have occurred frequently. Emergency response plan optimization is the key to handling accidents. Nevertheless, the non-linear relationship between various indicators and emergency plans has greatly prevented researchers from making reasonable assessments. Thus, an integrated assessment method is proposed by incorporating an improved technique for order preference by similarity to ideal solution, Shannon entropy and a Coordinated development degree model to evaluate emergency plans. The Shannon entropy method was used to analyze different types of index values. TOPSIS is used to calculate the relative closeness to the ideal solution. The coordinated development degree model is applied to express the relationship between the relative closeness and inhomogeneity of the emergency plan. This method is tested in the decision support system of the Middle Route Construction and Administration Bureau, China. By considering the different nature of the indicators, the integrated assessment method is eventually proven as a highly realistic method for assessing emergency plans. The advantages of this method are more prominent when there are more indicators of the evaluation object and the nature of each indicator is quite different. In summary, this integrated assessment method can provide a targeted reference or guidance for emergency control decision makers.
Long et al. REF proposed an integrated assessment method by incorporating an improved technique for order preference by similarity to ideal solutions, Shannon entropy and a coordinated development degree model to evaluate emergency plans.
143429411
Integrated Assessment Method of Emergency Plan for Sudden Water Pollution Accidents Based on Improved TOPSIS, Shannon Entropy and a Coordinated Development Degree Model
{ "venue": null, "journal": "Sustainability", "mag_field_of_study": [ "Economics" ] }
Abstract. We present algorithms for finding large graph matchings in the streaming model. In this model, applicable when dealing with massive graphs, edges are streamed in in some arbitrary order rather than residing in randomly accessible memory. For ε > 0, we achieve a 1/(1+ε) approximation for maximum cardinality matching and a 1/(2+ε) approximation to maximum weighted matching. Both algorithms use a constant number of passes and Õ(|V|) space.
When considering multi-pass algorithms, REF gave a (1 − ε)-approximation algorithm for maximum cardinality matching using a constant number of passes and Õ(|V|) space.
15737329
Finding graph matchings in data streams
{ "venue": "APPROX-RANDOM", "journal": null, "mag_field_of_study": [ "Mathematics", "Computer Science" ] }
The AIAA Modeling and Simulation Technical Committee has worked for several years to develop a standard by which the information needed to develop physics-based models of aircraft can be specified. The purpose of this standard is to provide a well-defined set of information, definitions, data tables and axis systems so that cooperating organizations can transfer a model from one simulation facility to another with maximum efficiency.
The American Institute of Aeronautics and Astronautics (AIAA) Modeling and Simulation Technical Committee has proposed a standard REF for the interchange of simulation modeling data of a vehicle or an aircraft between different simulation facilities.
62723172
FLIGHT DYNAMIC MODEL EXCHANGE USING XML
{ "venue": "AIAA Modeling and Simulation Technologies Conference and Exhibit", "journal": "AIAA Modeling and Simulation Technologies Conference and Exhibit", "mag_field_of_study": [ "Computer Science" ] }
ABSTRACT Quantifying the impact of scientific papers objectively is crucial for research output assessment, which subsequently affects institution and country rankings, research funding allocations, academic recruitment, and national/international scientific priorities. While most of the assessment schemes based on publication citations may potentially be manipulated through negative citations, in this paper, we explore the conflict of interest (COI) relationships and discover negative citations and subsequently weaken the associated citation strength. Positive and negative COI-distinguished objective rank algorithm (PANDORA) has been developed, which captures the positive and negative COI, together with the positive and negative suspected COI relationships. In order to alleviate the influence caused by negative COI relationship, collaboration times, collaboration time span, citation times, and citation time span are employed to determine the citing strength; while for positive COI relationship, we regard it as normal citation relationship. Furthermore, we calculate the impact of scholarly papers by PageRank and HITS algorithms, based on a credit allocation algorithm which is utilized to assess the impact of institutions fairly and objectively. Experiments are conducted on the publication data set from American Physical Society data set, and the results demonstrate that our method significantly outperforms the current solutions in recommendation intensity of list R at top-K and Spearman's rank correlation coefficient at top-K. INDEX TERMS Conflict of interest, negative citations, impact evaluation.
Bai et al. REF first explored the conflict of interest (COI) relationships to discover negative citations and weaken the associated citation strength.
6340052
The Role of Positive and Negative Citations in Scientific Evaluation
{ "venue": "IEEE Access", "journal": "IEEE Access", "mag_field_of_study": [ "Computer Science" ] }
Abstract-There is a huge increase of interest in time series methods and techniques. Virtually every piece of information collected from human, natural, and biological processes is susceptible to changes over time, and the study of how these changes occur is a central issue in fully understanding such processes. Among all time series mining tasks, classification is likely to be the most prominent one. In time series classification there is a significant body of empirical research indicating that the k-nearest neighbor rule in the time domain is very effective. However, certain time series features are not easily identified in this domain, and a change in representation may reveal some significant and unknown features. In this work, we propose the use of recurrence plots as a representation domain for time series classification. Our approach measures the similarity between recurrence plots using the Campana-Keogh (CK-1) distance, a Kolmogorov complexity-based distance that uses video compression algorithms to estimate image similarity. We show that recurrence plots allied to the CK-1 distance lead to significant improvements in accuracy rates compared to Euclidean distance and Dynamic Time Warping on several data sets. Although recurrence plots cannot provide the best accuracy rates for all data sets, we demonstrate that we can predict ahead of time that our method will outperform the time representation with Euclidean and Dynamic Time Warping distances.
Silva and colleagues proposed the use of RPs as a representation domain for time series classification REF .
6008338
Time Series Classification Using Compression Distance of Recurrence Plots
{ "venue": "2013 IEEE 13th International Conference on Data Mining", "journal": "2013 IEEE 13th International Conference on Data Mining", "mag_field_of_study": [ "Computer Science" ] }
Image segmentation is one of the most important tasks in medical image analysis and is often the first and the most critical step in many clinical applications. In brain MRI analysis, image segmentation is commonly used for measuring and visualizing the brain's anatomical structures, for analyzing brain changes, for delineating pathological regions, and for surgical planning and image-guided interventions. In the last few decades, various segmentation techniques of different accuracy and degree of complexity have been developed and reported in the literature. In this paper we review the most popular methods commonly used for brain MRI segmentation. We highlight differences between them and discuss their capabilities, advantages, and limitations. To address the complexity and challenges of the brain MRI segmentation problem, we first introduce the basic concepts of image segmentation. Then, we explain different MRI preprocessing steps including image registration, bias field correction, and removal of nonbrain tissue. Finally, after reviewing different brain MRI segmentation methods, we discuss the validation problem in brain MRI segmentation.
As reviewed in REF , various segmentation techniques have been introduced in the literature.
215221047
MRI Segmentation of the Human Brain: Challenges, Methods, and Applications
null
The capacity of a neural network to absorb information is limited by its number of parameters. Conditional computation, where parts of the network are active on a per-example basis, has been proposed in theory as a way of dramatically increasing model capacity without a proportional increase in computation. In practice, however, there are significant algorithmic and performance challenges. In this work, we address these challenges and finally realize the promise of conditional computation, achieving greater than 1000x improvements in model capacity with only minor losses in computational efficiency on modern GPU clusters. We introduce a Sparsely-Gated Mixture-of-Experts layer (MoE), consisting of up to thousands of feed-forward sub-networks. A trainable gating network determines a sparse combination of these experts to use for each example. We apply the MoE to the tasks of language modeling and machine translation, where model capacity is critical for absorbing the vast quantities of knowledge available in the training corpora. We present model architectures in which a MoE with up to 137 billion parameters is applied convolutionally between stacked LSTM layers. On large language modeling and machine translation benchmarks, these models achieve significantly better results than state-of-the-art at lower computational cost.
REF also proposed to use MoEs for language modeling.
12462234
Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Mathematics", "Computer Science" ] }
Word embeddings allow natural language processing systems to share statistical information across related words. These embeddings are typically based on distributional statistics, making it difficult for them to generalize to rare or unseen words. We propose to improve word embeddings by incorporating morphological information, capturing shared sub-word features. Unlike previous work that constructs word embeddings directly from morphemes, we combine morphological and distributional information in a unified probabilistic framework, in which the word embedding is a latent variable. The morphological information provides a prior distribution on the latent word embeddings, which in turn condition a likelihood function over an observed corpus. This approach yields improvements on intrinsic word similarity evaluations, and also in the downstream task of part-of-speech tagging.
REF incorporate morphological information as a prior distribution to improve word embeddings.
1524421
Morphological Priors for Probabilistic Neural Word Embeddings
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science" ] }
Traditional approaches to the task of ACE event extraction usually depend on manually annotated data, which is often laborious to create and limited in size. Therefore, in addition to the difficulty of event extraction itself, insufficient training data hinders the learning process as well. To promote event extraction, we first propose an event extraction model to overcome the roles overlap problem by separating the argument prediction in terms of roles. Moreover, to address the problem of insufficient training data, we propose a method to automatically generate labeled data by editing prototypes and screen out generated samples by ranking the quality. Experiments on the ACE2005 dataset demonstrate that our extraction model can surpass most existing extraction methods. Besides, incorporating our generation method exhibits further significant improvement. It obtains new state-of-the-art results on the event extraction task, including pushing the F1 score of trigger classification to 81.1%, and the F1 score of argument classification to 58.9%.
REF proposes a method to automatically generate labeled data by editing prototypes and to screen out generated samples by ranking their quality.
196178503
Exploring Pre-trained Language Models for Event Extraction and Generation
{ "venue": "ACL", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Abstract-While FPGA-based hardware accelerators have repeatedly been demonstrated as a viable option, their programmability remains a major barrier to their wider acceptance by application code developers. These platforms are typically programmed in a low-level hardware description language, a skill not common among application developers and a process that is often tedious and error-prone. Programming FPGAs from high-level languages would provide easier integration with software systems as well as open up hardware accelerators to a wider spectrum of application developers. In this paper, we present a major revision to the Riverside Optimizing Compiler for Configurable Circuits (ROCCC) designed to create hardware accelerators from C programs. Novel additions to ROCCC include (1) intuitive modular bottom-up design of circuits from C, and (2) separation of code generation from specific FPGA platforms. The additions we make do not introduce any new syntax to the C code and maintain the high-level optimizations from the ROCCC system that generate efficient code. The modular code we support functions identically as software or hardware. Additionally, we enable user control of hardware optimizations such as systolic array generation and temporal common subexpression elimination. We evaluate the quality of the ROCCC 2.0 tool by comparing it to hand-written VHDL code. We show comparable clock frequencies and an 18% higher throughput. The productivity advantages of ROCCC 2.0 are evaluated using the metrics of lines of code and programming time, showing an average of 15x improvement over hand-written VHDL.
Most notably, the Riverside Optimizing Compiler for Reconfigurable Circuits (ROCCC) 2.0 REF supports a subset of C and produces VHDL hardware accelerators.
10601863
Designing Modular Hardware Accelerators in C with ROCCC 2.0
{ "venue": "2010 18th IEEE Annual International Symposium on Field-Programmable Custom Computing Machines", "journal": "2010 18th IEEE Annual International Symposium on Field-Programmable Custom Computing Machines", "mag_field_of_study": [ "Computer Science" ] }
Abstract. We present a comprehensive approach to ontology evaluation and validation, which have become a crucial problem for the development of semantic technologies. Existing evaluation methods are integrated into one single framework by means of a formal model. This model consists, firstly, of a metaontology called O2, which characterises ontologies as semiotic objects. Based on O2 and an analysis of existing methodologies, we identify three main types of measures for evaluation: structural measures, which are typical of ontologies represented as graphs; functional measures, which are related to the intended use of an ontology and of its components; and usability-profiling measures, which depend on the level of annotation of the considered ontology. The metaontology is then complemented with an ontology of ontology validation called oQual, which provides the means to devise the best set of criteria for choosing an ontology over others in the context of a given project. Finally, we provide a small example of how to apply oQual-derived criteria to a validation case.
The work proposed by REF defines some measures for assessing an ontology, and evaluates these measures by means of a meta-ontology against which the ontology under validation is compared.
1833696
Modelling Ontology Evaluation and Validation
{ "venue": "ESWC", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Abstract: Electric power consumption short-term forecasting for individual households is an important and challenging topic in the fields of AI-enhanced energy saving, smart grid planning, sustainable energy usage and electricity market bidding system design. Due to the variability of each household's personalized activity, difficulties exist for traditional methods, such as auto-regressive moving average models, machine learning methods and non-deep neural networks, to provide accurate prediction for single household electric power consumption. Recent works show that the long short term memory (LSTM) neural network outperforms most of those traditional methods for power consumption forecasting problems. Nevertheless, two research gaps remain as unsolved problems in the literature. First, the prediction accuracy is still not reaching the practical level for real-world industrial applications. Second, most existing works only work on the one-step forecasting problem; the forecasting time is too short for practical usage. In this study, a hybrid deep learning neural network framework that combines convolutional neural network (CNN) with LSTM is proposed to further improve the prediction accuracy. The original short-term forecasting strategy is extended to a multi-step forecasting strategy to introduce more response time for electricity market bidding. Five real-world household power consumption datasets are studied, the proposed hybrid deep learning neural network outperforms most of the existing approaches, including auto-regressive integrated moving average (ARIMA) model, persistent model, support vector regression (SVR) and LSTM alone. In addition, we show a k-step power consumption forecasting strategy to promote the proposed framework for real-world application usage.
Yan et al. REF designed a hybrid deep learning method that integrates a convolutional neural network (CNN) and a long short-term memory (LSTM) neural network to forecast power consumption values at five-minute intervals.
115973916
Multi-Step Short-Term Power Consumption Forecasting with a Hybrid Deep Learning Strategy
{ "venue": null, "journal": "Energies", "mag_field_of_study": [ "Engineering" ] }
In many collaborative systems, users can trigger the execution of commands in a process owned by another user. Unless the access rights of such processes are limited, any user in the collaboration can gain access to another's private files; execute applications on another user's behalf; or read public system files, such as the password file, on another user's machine. However, some applications require limited sharing of private files, so it may be desirable to grant access to these files for a specific purpose. Role-based access control (RBAC) models can be used to limit the access rights of processes, but current implementations do not enable users to flexibly control the access rights of a process at runtime. We define a discretionary access control model that enables principals to flexibly control the access rights of a collaborative process. We then specify the requirements of RBAC models necessary to implement this discretionary access control model.
Jaeger et al. REF present basic requirements for role-based access control within collaborative systems.
11078950
Requirements of role-based access control for collaborative systems
{ "venue": "RBAC '95", "journal": null, "mag_field_of_study": [ "Computer Science", "Business" ] }
Heterogeneous cellular networks (HCNs) are emerging as a promising candidate for the fifth-generation (5G) mobile network. With base stations (BSs) of small cells densely deployed, the cost-effective, flexible, and green backhaul solution has become one of the most urgent and critical challenges. With vast amounts of spectrum available, wireless backhaul in the millimeter-wave (mmWave) band is able to provide transmission rates of several gigabits per second. The mmWave backhaul utilizes beamforming to achieve directional transmission, and concurrent transmissions under low interlink interference can be enabled to improve network capacity. To achieve an energy-efficient solution for mmWave backhauling, we first formulate the problem of minimizing the energy consumption via concurrent transmission scheduling and power control into a mixed integer nonlinear program (MINLP). Then, we develop an energy-efficient and practical mmWave backhauling scheme, which consists of the maximum independent set (MIS)-based scheduling algorithm and the power control algorithm. We also theoretically analyze the conditions that our scheme reduces energy consumption, as well as the choice of the interference threshold. Through extensive simulations under various traffic patterns and system parameters, we demonstrate the superior performance of our scheme in terms of energy efficiency and analyze the choice of the interference threshold under different traffic loads, BS distributions, and the maximum transmission power.
Niu et al. REF proposed an energy efficient scheduling scheme for the mmWave backhaul network, which exploits concurrent transmissions to achieve higher energy efficiency.
3590963
Energy-Efficient Scheduling for mmWave Backhauling of Small Cells in Heterogeneous Cellular Networks
{ "venue": "IEEE Transactions on Vehicular Technology", "journal": "IEEE Transactions on Vehicular Technology", "mag_field_of_study": [ "Computer Science" ] }
Abstract-Locating content in decentralized peer-to-peer systems is a challenging problem. Gnutella, a popular file-sharing application, relies on flooding queries to all peers. Although flooding is simple and robust, it is not scalable. In this paper, we explore how to retain the simplicity of Gnutella, while addressing its inherent weakness: scalability. We propose a content location solution in which peers loosely organize themselves into an interest-based structure on top of the existing Gnutella network. Our approach exploits a simple, yet powerful principle called interest-based locality, which posits that if a peer has a particular piece of content that one is interested in, it is very likely that it will have other items that one is interested in as well. When using our algorithm, called interest-based shortcuts, a significant amount of flooding can be avoided, making Gnutella a more competitive solution. In addition, shortcuts are modular and can be used to improve the performance of other content location mechanisms including distributed hash table schemes. We demonstrate the existence of interest-based locality in five diverse traces of content distribution applications, two of which are traces of popular peer-to-peer file-sharing applications. Simulation results show that interest-based shortcuts often resolve queries quickly in one peer-to-peer hop, while reducing the total load in the system by a factor of 3 to 7.
In a similar vein, Sripanidkulchai et al. REF used interest-based locality to organize nodes into an interest-based structure, by which a significant amount of flooding in Gnutella-like systems can be avoided.
277780
Efficient content location using interest-based locality in peer-to-peer systems
{ "venue": "IEEE INFOCOM 2003. Twenty-second Annual Joint Conference of the IEEE Computer and Communications Societies (IEEE Cat. No.03CH37428)", "journal": "IEEE INFOCOM 2003. Twenty-second Annual Joint Conference of the IEEE Computer and Communications Societies (IEEE Cat. No.03CH37428)", "mag_field_of_study": [ "Computer Science" ] }
Insect behaviour is an important research topic in plant protection. To study insect behaviour accurately, it is necessary to observe and record their flight trajectory quantitatively and precisely in three dimensions (3D). The goal of this research was to analyse frames extracted from videos using Kernelized Correlation Filters (KCF) and Background Subtraction (BS) (KCF-BS) to plot the 3D trajectory of the cabbage butterfly (P. rapae). Considering the experimental environment with a wind tunnel, a quadrature binocular vision insect video capture system was designed and applied in this study. The KCF-BS algorithm was used to track the butterfly in video frames and obtain coordinates of the target centroid in the two videos. Insect behaviour has become an important research direction in the field of plant protection [1]. Behavioural research may inform methods for biological control [2,3], biological model construction [4,5] and plant-insect interactions [6,7]. To study the behaviour of flying insects accurately, it is necessary to observe and record their flight trajectory quantitatively and precisely in three dimensions (3D). Traditional detection methods for insect behaviour depend mainly on direct and manual observation, which is complicated by arbitrary qualification, waste of human resources and low effectiveness [8]. Recent developments in computer vision have stimulated the application of these techniques to insect tracking [9-11]. Straw et al. [12] used three cameras to three-dimensionally track flying animals, including flies and birds, but the use of three cameras increased the difficulty of matching and the amount of 3D coordinate calculation. Okubo et al. [13] obtained images of a group of mosquitoes with a single camera and constructed the 3D trajectory of mosquitoes based on the geometric relationship between the mosquito group and its shadow on a white background. Stowers et al. [14] used the FreemoVR platform to establish a height-aversion assay in mice and studied visuomotor effects in Drosophila and zebrafish. However, this method was not directly suitable for the investigation of behaviours for which stereopsis is important, because it rendered visual stimuli in a perspective-correct manner for a single viewpoint. Jantzen and Eisner [15] implemented 3D trajectory tracking of Lepidoptera, and Lihoreau et al. [16] obtained the three-dimensional foraging flights of bumblebees. However, in these studies, the experimental environment was relatively simple, and the target was obvious. Automated image-based tracking has been applied for outdoor research, with imaging methods including thermal infrared and harmonic radar [20]. Xu et al. [21] proposed a method for the 3D observation of fish based on a single camera. A waterproof mirror was installed above an experimental fish tank to simulate a camera shooting from top to bottom. Although monocular vision was able to determine a 3D trajectory, the captured video was strongly influenced by environmental factors, and the calculation process was complex. Hardie and Powell [22] investigated the use of two or more parallel cameras to obtain an image sequence.
Yang et al. REF present a method to analyze frames extracted from videos using kernelized correlation filters (KCF) and background subtraction (BS) (KCF-BS) to plot the 3D trajectory of cabbage butterfly.
49414493
Target tracking and 3D trajectory acquisition of cabbage butterfly (P. rapae) based on the KCF-BS algorithm
{ "venue": "Scientific Reports", "journal": "Scientific Reports", "mag_field_of_study": [ "Computer Science", "Medicine" ] }
Effective file transfer between vehicles is fundamental to many emerging vehicular infotainment applications in highway Vehicular Ad Hoc Networks (VANETs), such as content distribution and social networking. However, due to fast mobility, the connection between vehicles tends to be short-lived and lossy, which makes intact file transfer extremely challenging. To tackle this problem, we present a novel Cluster-based File Transfer (CFT) scheme for highway VANETs in this paper. With CFT, when a vehicle requests a file, the transmission capacity between the resource vehicle and the destination vehicle is evaluated. If the requested file can be successfully transferred over the direct Vehicular-to-Vehicular (V2V) connection, the file transfer will be completed by the resource and the destination themselves. Otherwise, a cluster will be formed to help the file transfer. As a fully-distributed scheme that relies on the collaboration of cluster members, CFT does not require any assistance from roadside units or access points. Our experimental results indicate that CFT outperforms the existing file transfer schemes for highway VANETs.
Ref. REF proposes a high-integrity file transfer scheme for VANETs on highways named Cluster-based File Transfer (CFT) scheme.
18939669
CFT: A Cluster-based File Transfer Scheme for highway VANETs
{ "venue": "2017 IEEE International Conference on Communications (ICC)", "journal": "2017 IEEE International Conference on Communications (ICC)", "mag_field_of_study": [ "Computer Science" ] }
Abstract: Feature selection is an important step in building accurate classifiers and provides better understanding of the data sets. In this paper, we propose a feature subset selection method based on high-dimensional mutual information. We also propose to use the entropy of the class attribute as a criterion to determine the appropriate subset of features when building classifiers. We prove that if the mutual information between a feature set X and the class attribute Y equals the entropy of Y, then X is a Markov blanket of Y. We show that in some cases, it is infeasible to approximate the high-dimensional mutual information with algebraic combinations of pairwise mutual information in any form. In addition, an exhaustive search of all combinations of features is a prerequisite for finding the optimal feature subsets for classifying these kinds of data sets. We show that our approach outperforms existing filter feature subset selection methods for most of the 24 selected benchmark data sets.
In REF , the authors measure the relationship between candidate feature subsets U and the class attribute y by exploiting results on the Discrete Function Learning (DFL) algorithm jointly with high-dimensional mutual information evaluation.
12877142
A Feature Subset Selection Method Based On High-Dimensional Mutual Information
{ "venue": "Entropy", "journal": "Entropy", "mag_field_of_study": [ "Mathematics", "Computer Science" ] }
The concept of contagion has steadily expanded from its original grounding in epidemic disease to describe a vast array of processes that spread across networks, notably social phenomena such as fads, political opinions, the adoption of new technologies, and financial decisions. Traditional models of social contagion have been based on physical analogies with biological contagion, in which the probability that an individual is affected by the contagion grows monotonically with the size of his or her "contact neighborhood", the number of affected individuals with whom he or she is in contact. Whereas this contact neighborhood hypothesis has formed the underpinning of essentially all current models, it has been challenging to evaluate it due to the difficulty in obtaining detailed data on individual network neighborhoods during the course of a large-scale contagion process. Here we study this question by analyzing the growth of Facebook, a rare example of a social process with genuinely global adoption. We find that the probability of contagion is tightly controlled by the number of connected components in an individual's contact neighborhood, rather than by the actual size of the neighborhood. Surprisingly, once this "structural diversity" is controlled for, the size of the contact neighborhood is in fact generally a negative predictor of contagion. More broadly, our analysis shows how data at the size and resolution of the Facebook network make possible the identification of subtle structural signals that go undetected at smaller scales yet hold pivotal predictive roles for the outcomes of social processes. Social networks play host to a wide range of important social and nonsocial contagion processes (1-8). The microfoundations of social contagion can, however, be significantly more complex, as social decisions can depend much more subtly on social network structure (9-17). In this study we show how the details of the network neighborhood structure can play a significant role in empirically predicting the decisions of individuals. We perform our analysis on two social contagion processes that take place on the social networking site Facebook: the process whereby users join the site in response to an invitation e-mail from an existing Facebook user (henceforth termed "recruitment") and the process whereby users eventually become engaged users after joining (henceforth termed "engagement"). Although the two processes we study formally pertain to Facebook, their details differ considerably; the consistency of our results across these differing processes, as well as across different national populations (Materials and Methods), suggests that the phenomena we observe are not specific to any one modality or locale. The social network neighborhoods of individuals commonly consist of several significant and well-separated clusters, reflecting distinct social contexts within an individual's life or life history (18-20). We find that this multiplicity of social contexts, which we term structural diversity, plays a key role in predicting the decisions of individuals that underlie the social contagion processes we study. We develop means of quantifying such structural diversity for network neighborhoods, broadly applicable at many different scales. The recruitment process we study primarily features small neighborhoods, but the on-site neighborhoods that we study in the context of engagement can be considerably larger.
For small neighborhoods, structural diversity is succinctly measured by the number of connected components of the neighborhood. For larger neighborhoods, however, merely counting connected components fails to distinguish how substantial the components are in their size and connectivity. To determine whether the structural diversity of on-site neighborhoods is a strong predictor of on-site engagement, we evaluate several variations of the connected component concept that identify and enumerate substantial structural contexts within large neighborhood graphs. We find that all of the different structural diversity measures we consider robustly predict engagement. For both recruitment and engagement, structural diversity emerges as an important predictor for the study of social contagion processes.
The concept of structural diversity was first proposed by Ugander et al. REF , who found that the user recruitment rate in Facebook is determined by the variety of an individual's contact neighborhood, rather than the size of his or her neighborhood.
12993348
Structural diversity in social contagion
{ "venue": "Proceedings of the National Academy of Sciences of the United States of America", "journal": "Proceedings of the National Academy of Sciences of the United States of America", "mag_field_of_study": [ "Psychology", "Medicine", "Computer Science" ] }
We study online social networks in which relationships can be either positive (indicating relations such as friendship) or negative (indicating relations such as opposition or antagonism). Such a mix of positive and negative links arises in a variety of online settings; we study datasets from Epinions, Slashdot and Wikipedia. We find that the signs of links in the underlying social networks can be predicted with high accuracy, using models that generalize across this diverse range of sites. These models provide insight into some of the fundamental principles that drive the formation of signed links in networks, shedding light on theories of balance and status from social psychology; they also suggest social computing applications by which the attitude of one user toward another can be estimated from evidence provided by their relationships with other members of the surrounding social network. A fundamental question is then the following: How does the sign of a given link interact with the pattern of link signs in its local vicinity, or more broadly throughout the network? Moreover, what are the plausible configurations of link signs in real social networks? Answers to these questions can help us reason about how negative relationships are used in online systems, and answers that generalize across multiple domains can help to illuminate some of the underlying principles. Effective answers to such questions can also help inform the design of social computing applications in which we attempt to infer the (unobserved) attitude of one user toward another, using the positive and negative relations that have been observed in the vicinity of this user. Indeed, a common task in online communities is to suggest new relationships to a user, by proposing the formation of links to other users with whom one shares friends, interests, or other properties. The challenge here is that users may well have pre-existing attitudes and opinions -both positive and negative -towards others with whom they share certain characteristics, and hence before arbitrarily making such suggestions to users, it is important to be able to estimate these attitudes from existing evidence in the network. For example, if A is known to dislike people that B likes, this may well provide evidence about A's attitude toward B. Edge Sign Prediction. With this in mind, we begin by formulating a concrete underlying task -the edge sign prediction problem -for which we can directly evaluate and compare different approaches. The edge sign prediction problem is defined as follows. Suppose we are given a social network with signs on all its edges, but the sign on the edge from node u to node v, denoted s(u, v), has been "hidden." How reliably can we infer this sign s(u, v) using the information provided by the rest of the network? Note that this problem is both a concrete formulation of our basic questions about the typical patterns of link signs, and also a way of approaching our motivating application of inferring unobserved attitudes among users of social computing sites. There is an analogy here to the link prediction problem for social networks [16]; in the same way that link prediction is used to infer latent relationships that are present but not recorded by explicit links, the sign prediction problem can be used to estimate the sentiment of individuals toward each other, given information about other sentiments in the network.
In studying the sign prediction problem, we are following an experimental framework articulated by Guha et al. in their study of trust and distrust on Epinions [8] . We extend their approach in a number of directions. First, where their goal was to evaluate propagation algorithms based on exponentiating the adjacency matrix, we approach the problem using a machine-learning framework that enables us to evaluate which of a range of structural features are most informative for the prediction task. Using this framework, we also obtain significantly improved performance on the task itself.
Furthermore, triads in a social network have been found to help predict voting behavior REF .
7119014
Predicting positive and negative links in online social networks
{ "venue": "WWW '10", "journal": null, "mag_field_of_study": [ "Mathematics", "Computer Science", "Physics" ] }
Biometric features such as fingerprints, iris patterns, and face features help to identify people and restrict access to secure areas by performing advanced pattern analysis and matching. Face recognition is one of the most promising biometric methodologies for human identification in a non-cooperative security environment. However, the recognition results obtained by face recognition systems are affected by several variations that may happen to the patterns in an unrestricted environment. As a result, several algorithms have been developed for extracting different facial features for face recognition. Due to the various possible challenges of data captured under different lighting conditions, viewing angles, facial expressions, and partial occlusions in natural environmental conditions, automatic facial recognition still remains a difficult issue that needs to be resolved. In this paper, we propose a novel approach to tackling some of these issues by analyzing the local textural descriptions for facial feature representation. The textural information is extracted by an enhanced local binary pattern (ELBP) description of all the local regions of the face. The relationship of each pixel with respect to its neighborhood is extracted and employed to calculate the new representation. ELBP reconstructs a much better textural feature extraction vector from an original gray-level image in different lighting conditions. The dimensionality of the texture image is reduced by principal component analysis performed on each local face region. Each low-dimensional vector representing a local region is then weighted based on the significance of the sub-region. The weight of each sub-region is determined by employing the local variance estimate of the respective region, which represents the significance of the region. The final facial textural feature vector is obtained by concatenating the reduced-dimensional weighted sets of all the modules (sub-regions) of the face image. Experiments conducted on various popular face databases show promising performance of the proposed algorithm in varying lighting, expression, and partial occlusion conditions. Four databases were used for testing the performance of the proposed system: Yale Face database, Extended Yale Face database B, Japanese Female Facial Expression database, and CMU AMP Facial Expression database. The experimental results in all four databases show the effectiveness of the proposed system. Also, the computation cost is lower because of the simplified calculation steps. Research work is progressing to investigate the effectiveness of the proposed face recognition method on pose-varying conditions as well. It is envisaged that a multilane approach of trained frameworks at different pose bins and an appropriate voting strategy would lead to a good recognition rate in such situations.
Reference REF estimated the weight of each subregion by employing the local variance.
120403361
Adaptive weighted local textural features for illumination, expression, and occlusion invariant face recognition
{ "venue": "Electronic Imaging", "journal": null, "mag_field_of_study": [ "Physics", "Engineering" ] }
In this paper, we consider sentence simplification as a special form of translation with the complex sentence as the source and the simple sentence as the target. We propose a Tree-based Simplification Model (TSM), which, to our knowledge, is the first statistical simplification model covering splitting, dropping, reordering and substitution integrally. We also describe an efficient method to train our model with a large-scale parallel dataset obtained from the Wikipedia and Simple Wikipedia. The evaluation shows that our model achieves better readability scores than a set of baseline systems.
Sentence Simplification For statistical modeling, REF proposed a tree-based sentence simplification model drawing inspiration from statistical machine translation.
15636533
A Monolingual Tree-based Translation Model for Sentence Simplification
{ "venue": "COLING", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Accessing many data sources aggravates problems for users of heterogeneous distributed databases. Database administrators must deal with fragile mediators, that is, mediators with schemas and views that must be significantly changed to incorporate a new data source. When implementing translators of queries from mediators to data sources, database implementors must deal with data sources that do not support all the functionality required by mediators. Application programmers must deal with graceless failures for unavailable data sources. Queries simply return failure and no further information when data sources are unavailable for query processing. The Distributed Information Search COmponent (Disco) addresses these problems. Data modeling techniques manage the connections to data sources, and sources can be added transparently to the users and applications. The interface between mediators and data sources flexibly handles different query languages and different data source functionality. Query rewriting and optimization techniques rewrite queries so they are efficiently evaluated by sources. Query processing and evaluation semantics are developed to process queries over unavailable data sources. In this article we describe (a) the distributed mediator architecture of Disco, (b) the data model and its modeling of data source connections, (c) the interface to underlying data sources and the query rewriting process, and (d) query processing semantics. We describe several advantages of our system.
The authors of REF propose a distributed mediator architecture with a flexible interface between mediators and data sources that efficiently handles different query languages and different data source functionality.
10684299
Scaling access to heterogeneous data sources with DISCO
{ "venue": "IEEE Transactions on Knowledge and Data Engineering", "journal": "IEEE Transactions on Knowledge and Data Engineering", "mag_field_of_study": [ "Computer Science" ] }
Configuration and customization choices arise due to the heterogeneous and scalable aspect of the cloud computing paradigm. To avoid being restricted to a given cloud and to ensure application requirements, using several clouds to deploy a multi-cloud configuration is recommended, but this introduces several challenges due to the number of providers and their intrinsic variability. In this paper, we present a model-driven approach based on Feature Models (FMs) originating from Software Product Lines (SPLs) to handle cloud variability and then manage and create cloud configurations. We combine it with ontologies, used to model the various semantics of cloud systems. The approach takes into consideration application technical requirements as well as non-functional ones to provide a set of valid cloud or multi-cloud configurations, and is implemented in a framework named Saloon.
Quinton et al. REF have established an architecture based on feature models and ontologies to describe and model cloud computing systems, reducing variability when working with multi-cloud configurations.
18471299
Towards multi-cloud configurations using feature models and ontologies
{ "venue": "MultiCloud '13", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
In this paper, we propose a deep-learning-based vehicle trajectory prediction technique which can generate the future trajectory sequence of surrounding vehicles in real time. We employ the encoder-decoder architecture, which analyzes the pattern underlying the past trajectory using the long short-term memory (LSTM) based encoder and generates the future trajectory sequence using the LSTM-based decoder. This structure produces the K most likely trajectory candidates over an occupancy grid map by employing the beam search technique, which keeps the K locally best candidates from the decoder output. The experiments conducted on highway traffic scenarios show that the prediction accuracy of the proposed method is significantly higher than that of conventional trajectory prediction techniques.
SEQ2SEQ REF presents a new LSTM-based encoder-decoder network to predict trajectories over an occupancy grid map.
3653452
Sequence-to-Sequence Prediction of Vehicle Trajectory via LSTM Encoder-Decoder Architecture
{ "venue": "2018 IEEE Intelligent Vehicles Symposium (IV)", "journal": "2018 IEEE Intelligent Vehicles Symposium (IV)", "mag_field_of_study": [ "Computer Science", "Mathematics" ] }
The purpose of this paper is to describe current work in specification and development of Web-based 3D standards and tools. The paper presents a Web3D application for military education and training currently in progress for the Defense Modeling and Simulation Office and the Marine Corps Combat Development Command. Web-based 3D content development tools continue to evolve. The Web3D Consortium (http://www.web3d.org) promotes the specification and development of tools for exploiting 3D graphics in the World Wide Web. Current work is focused on developing the next-generation specification for Web3D, referred to as Extensible 3D (X3D -http://www.web3d.org/x3d.html). X3D is a scene graph architecture and encoding that improves on the Virtual Reality Modeling Language (VRML) international standard (VRML 97, ISO/IEC 14772-1:1997). X3D uses the Extensible Markup Language (XML) to express the geometry and behavior capabilities of VRML. Primary target applications for X3D are electronic commerce product and technology demonstration, visual simulation, database visualization, advertising and web page animation, augmented news and documentaries, training, games, entertainment, and education. This paper discusses how X3D will address shortcomings of VRML 97, VRML 97 compatibility, interoperation with other relevant standards, tighter media integration, improved visual quality, component-based approach, file format issues, and time to market. The paper will also present the current progress of the X3D Working Group in developing the runtime core specification, component specifications, advanced graphics extensions (including GeoVRML for geographic representations and H-Anim for humanoid models), XML 3D tags definition, text file format, binary file format, conversion and transformation of existing VRML 97 content, conformance testing, demonstrations, and implementations. The Defense Modeling and Simulation Office (DMSO) and Marine Corps Combat Development Command (Training and Education Command) have tasked the Naval Postgraduate School (NPS) to perform research toward development of a scenario authoring and Web-based visualization capability. Prototyping activities are employing Web-based technologies for information content (XML) and 3D graphical content (X3D) to create an initial presentation of an amphibious operation. The envisioned full capability will represent critical aspects of the battlespace, such as terrain, force dispositions, maneuvers, fires, coordination measures, and timing. This paper discusses technical challenges in representing complex military operations in Web environments and describes work in progress to demonstrate application of Web-based technologies to create and explore complex, multi-dimensional operational scenarios. The paper describes application of the capabilities into other education and training domains.
Other work applying web-based technologies to defense applications focused more on 3D technologies and authoring REF .
1471024
Web-Based 3D Technology for Scenario Authoring and Visualization: The SAVAGE Project
{ "venue": null, "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Highlights: Suitably configured electrochemical sensors can be used for air quality studies. Evidence of performance of electrochemical sensors at parts-per-billion levels. Sensors are sensitive, low noise, highly linear and generally highly selective. Measurement density (space and time) unachievable using current methods. Low-cost air quality sensor networks are now feasible for widespread use. Measurements at appropriate spatial and temporal scales are essential for understanding and monitoring spatially heterogeneous environments with complex and highly variable emission sources, such as in urban areas. However, the costs and complexity of conventional air quality measurement methods mean that measurement networks are generally extremely sparse. In this paper we show that miniature, low-cost electrochemical gas sensors, traditionally used for sensing at parts-per-million (ppm) mixing ratios can, when suitably configured and operated, be used for parts-per-billion (ppb) level studies for gases relevant to urban air quality. Sensor nodes, in this case consisting of multiple individual electrochemical sensors, can be low-cost and highly portable, thus allowing the deployment of scalable high-density air quality sensor networks at fine spatial and temporal scales, and in both static and mobile configurations. In this paper we provide evidence for the performance of electrochemical sensors at the parts-per-billion level, and then outline results obtained from deployments of networks of sensor nodes in both an autonomous, high-density, static network in the wider Cambridge (UK) area, and as mobile networks for quantification of personal exposure. Examples are presented of measurements obtained with both highly portable devices held by pedestrians and cyclists, and static devices attached to street furniture. The widely varying mixing ratios reported by this study confirm that the urban environment cannot be fully characterised using sparse, static networks, and that measurement networks with higher resolution (both spatially and temporally) are required to quantify air quality at the scales which are present in the urban environment. We conclude that the instruments described here, and the low-cost/high-density measurement philosophy which underpins them, have the potential to provide a far more complete assessment of the high-granularity air quality structure generally observed in the urban environment, and could ultimately be used for quantification of human exposure as well as for monitoring and legislative purposes.
The first scenario is feasible with a proper configuration of low-cost sensors: suitably configured sensors commonly used for measuring at the parts-per-million (ppm) level can provide reliable results at the parts-per-billion (ppb) scale REF .
18110517
The use of electrochemical sensors for monitoring urban air quality in low-cost, high-density networks
{ "venue": null, "journal": "Atmospheric Environment", "mag_field_of_study": [ "Chemistry" ] }
Behavioral economics tells us that emotions can profoundly affect individual behavior and decision-making. Does this also apply to societies at large, i.e., can societies experience mood states that affect their collective decision making? By extension, is the public mood correlated with, or even predictive of, economic indicators? Here we investigate whether measurements of collective mood states derived from large-scale Twitter feeds are correlated to the value of the Dow Jones Industrial Average (DJIA) over time. We analyze the text content of daily Twitter feeds with two mood tracking tools, namely OpinionFinder, which measures positive vs. negative mood, and Google-Profile of Mood States (GPOMS), which measures mood in terms of 6 dimensions (Calm, Alert, Sure, Vital, Kind, and Happy). We cross-validate the resulting mood time series by comparing their ability to detect the public's response to the presidential election and Thanksgiving day in 2008. A Granger causality analysis and a Self-Organizing Fuzzy Neural Network are then used to investigate the hypothesis that public mood states, as measured by the OpinionFinder and GPOMS mood time series, are predictive of changes in DJIA closing values. Our results indicate that the accuracy of DJIA predictions can be significantly improved by the inclusion of specific public mood dimensions but not others. We find an accuracy of 87.6% in predicting the daily up and down changes in the closing values of the DJIA and a reduction of the Mean Average Percentage Error by more than 6%.
REF measure collective mood states (positive, negative, calm, alert, sure, vital, kind, and happy) through sentiment analysis applied to more than 9 million tweets posted in 2008.
14727513
Twitter mood predicts the stock market
{ "venue": "Journal of Computational Science, 2(1), March 2011, Pages 1-8", "journal": null, "mag_field_of_study": [ "Computer Science", "Physics" ] }
We consider the problem of recommending the best set of k items when there is an inherent ordering between items, expressed as a set of prerequisites (e.g., the movie 'Godfather I' is a prerequisite of 'Godfather II'). Since this general problem is computationally intractable, we develop 3 approximation algorithms to solve this problem for various prerequisite structures (e.g., chain graphs, AND graphs, AND-OR graphs). We derive worst-case bounds for these algorithms for these structures, and experimentally evaluate these algorithms on synthetic data. We also develop an algorithm to combine solutions in order to generate even better solutions, and compare the performance of this algorithm with the other three.
Recommendation with prerequisites was studied in REF , in which the goal is to recommend the best set of k items when there is an inherent ordering between items.
7975439
Evaluating, combining and generalizing recommendations with prerequisites
{ "venue": "CIKM '10", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Lacking realistic ground truth data, image denoising techniques are traditionally evaluated on images corrupted by synthesized i.i.d. Gaussian noise. We aim to obviate this unrealistic setting by developing a methodology for benchmarking denoising techniques on real photographs. We capture pairs of images with different ISO values and appropriately adjusted exposure times, where the nearly noise-free low-ISO image serves as reference. To derive the ground truth, careful post-processing is needed. We correct spatial misalignment, cope with inaccuracies in the exposure parameters through a linear intensity transform based on a novel heteroscedastic Tobit regression model, and remove residual low-frequency bias that stems, e.g., from minor illumination changes. We then capture a novel benchmark dataset, the Darmstadt Noise Dataset (DND), with consumer cameras of differing sensor sizes. One interesting finding is that various recent techniques that perform well on synthetic noise are clearly outperformed by BM3D on photographs with real noise. Our benchmark delineates realistic evaluation scenarios that deviate strongly from those commonly used in the scientific literature.
A careful recent evaluation with real data found that BM3D outperforms more recent techniques on real images REF .
9715523
Benchmarking Denoising Algorithms with Real Photographs
{ "venue": "2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)", "journal": "2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)", "mag_field_of_study": [ "Computer Science" ] }
A central problem in releasing aggregate information about sensitive data is to do so accurately while providing a privacy guarantee on the output. Recent work focuses on the class of linear queries, which include basic counting queries, data cubes, and contingency tables. The goal is to maximize the utility of their output, while giving a rigorous privacy guarantee. Most results follow a common template: pick a "strategy" set of linear queries to apply to the data, then use the noisy answers to these queries to reconstruct the queries of interest. This entails either picking a strategy set that is hoped to be good for the queries, or performing a costly search over the space of all possible strategies. In this paper, we propose a new approach that balances accuracy and efficiency: we show how to improve the accuracy of a given query set by answering some strategy queries more accurately than others. This leads to an efficient optimal noise allocation for many popular strategies, including wavelets, hierarchies, Fourier coefficients and more. For the important case of marginal queries we show that this strictly improves on previous methods, both analytically and empirically. Our results also extend to ensuring that the returned query answers are consistent with an (unknown) data set at minimal extra cost in terms of time and noise.
Given a workload of queries, Yaroslavtsev et al. REF introduced a solution to balance accuracy and efficiency by answering some queries more accurately than others.
9105079
Accurate and efficient private release of datacubes and contingency tables
{ "venue": "2013 IEEE 29th International Conference on Data Engineering (ICDE)", "journal": "2013 IEEE 29th International Conference on Data Engineering (ICDE)", "mag_field_of_study": [ "Computer Science" ] }
In this work, we propose an efficient selective retransmission method for multiple-input and multiple-output (MIMO) wireless systems under orthogonal frequency-division multiplexing (OFDM) signaling. A typical received OFDM frame may have some symbols in error, which results in a retransmission of the entire frame. Such a retransmission is often unnecessary, and to avoid this, we propose a method to selectively retransmit symbols that correspond to poor-quality subcarriers. We use the condition numbers of the subcarrier channel matrices of the MIMO-OFDM system as a quality measure. The proposed scheme is embedded in the modulation layer and is independent of conventional hybrid automatic repeat request (HARQ) methods. The receiver integrates the original OFDM and the punctured retransmitted OFDM signals for more reliable detection. The targeted retransmission results in fewer negative acknowledgements from conventional HARQ algorithms, which increases bandwidth and power efficiency. We investigate the efficacy of the proposed method for optimal and suboptimal receivers. The simulation results demonstrate the efficacy of the proposed method on throughput for MIMO-OFDM systems.
The work in REF investigates selective retransmission for MIMO-OFDM systems, using the condition number of the per-subcarrier channel matrices as a channel quality measure.
31411245
Bandwidth-Efficient Selective Retransmission for MIMO-OFDM Systems
{ "venue": null, "journal": "Etri Journal", "mag_field_of_study": [ "Computer Science" ] }
Motivation: Supervised machine learning is commonly applied in genomic research to construct a classifier from the training data that is generalizable to predict independent testing data. When test datasets are not available, cross-validation is commonly used to estimate the error rate. Many machine learning methods are available, and it is well known that no universally best method exists in general. It has been a common practice to apply many machine learning methods and report the method that produces the smallest cross-validation error rate. Theoretically, such a procedure produces a selection bias. Consequently, many clinical studies with moderate sample sizes (e.g. n = 30-60) risk reporting a falsely small cross-validation error rate that could not be validated later in independent cohorts. Results: In this article, we illustrated the probabilistic framework of the problem and explored the statistical and asymptotic properties. We proposed a new bias correction method based on learning curve fitting by inverse power law (IPL) and compared it with three existing methods: nested cross-validation, weighted mean correction and Tibshirani-Tibshirani procedure. All methods were compared in simulation datasets, five moderate size real datasets and two large breast cancer datasets. The result showed that IPL outperforms the other methods in bias correction with smaller variance, and it has an additional advantage to extrapolate error estimates for larger sample sizes, a practical feature to recommend whether more samples should be recruited to improve the classifier and accuracy. An R package 'MLbias' and all source files are publicly available.
REF proposed a resampling-based inverse power law (IPL) method for bias correction and compared its performance to those of TT, NCV, and WMC/WMCS on both simulated and real datasets.
1705607
Bias correction for selecting the minimal-error classifier from many machine learning models
{ "venue": "Bioinformatics", "journal": "Bioinformatics", "mag_field_of_study": [ "Computer Science", "Medicine" ] }
Although high dynamic range (HDR) imaging has gained great popularity and acceptance in both the scientific and commercial domains, the relationship between perceptually accurate, content-independent dynamic range and objective measures has not been fully explored. In this paper, a new methodology for perceived dynamic range evaluation of complex stimuli in HDR conditions is proposed. A subjective study with 20 participants was conducted and correlations between mean opinion scores (MOS) and three image features were analyzed. Strong Spearman correlations between MOS and the objective DR measure, and between MOS and image key, were found. An exploratory analysis reveals that additional image characteristics should be considered when modeling perceptually-based dynamic range metrics. Finally, one of the outcomes of the study is a perceptually annotated HDR image dataset with MOS values, which can be used for HDR imaging algorithm and metric validation, content selection, and analysis of aesthetic image attributes.
The work of Hulusic et al. REF introduces a subjective measurement methodology for the perceived dynamic range.
8426670
Perceived dynamic range of HDR images
{ "venue": "2016 Eighth International Conference on Quality of Multimedia Experience (QoMEX)", "journal": "2016 Eighth International Conference on Quality of Multimedia Experience (QoMEX)", "mag_field_of_study": [ "Computer Science" ] }
In this paper, a new computed tomography (CT) lung nodule computer-aided detection (CAD) method is proposed for detecting both solid nodules and ground-glass opacity (GGO) nodules (part solid and nonsolid). This method consists of several steps. First, the lung region is segmented from the CT data using a fuzzy thresholding method. Then, the volumetric shape index map, which is based on local Gaussian and mean curvatures, and the "dot" map, which is based on the eigenvalues of a Hessian matrix, are calculated for each voxel within the lungs to enhance objects of a specific shape with high spherical elements (such as nodule objects). The combination of the shape index (local shape information) and "dot" features (local intensity dispersion information) provides a good structure descriptor for the initial nodule candidate generation. Antigeometric diffusion, which diffuses across the image edges, is used as a preprocessing step. The smoothness of image edges enables the accurate calculation of voxel-based geometric features. Adaptive thresholding and modified expectation-maximization methods are employed to segment potential nodule objects. Rule-based filtering is first used to remove easily dismissible nonnodule objects. This is followed by a weighted support vector machine (SVM) classification to further reduce the number of false positive (FP) objects. The proposed method has been trained and validated on a clinical dataset of 108 thoracic CT scans using a wide range of tube dose levels that contain 220 nodules (185 solid nodules and 35 GGO nodules) determined by a ground truth reading process. The data were randomly split into training and testing datasets. The experimental results using the independent dataset indicate an average detection rate of 90.2%, with approximately 8.2 FP/scan. Some challenging nodules such as nonspherical nodules and low-contrast part-solid and nonsolid nodules were identified, while most tissues such as blood vessels were excluded. The method's high detection rate, fast computation, and applicability to different imaging conditions and nodule types show much promise for clinical applications.
Xujiong Ye et al. in REF presented a new computed tomography (CT) lung nodule computer-aided detection (CAD) method.
349469
Shape-Based Computer-Aided Detection of Lung Nodules in Thoracic CT Images
{ "venue": "IEEE Transactions on Biomedical Engineering", "journal": "IEEE Transactions on Biomedical Engineering", "mag_field_of_study": [ "Medicine", "Computer Science" ] }
In real world complex networks, the importance of a node depends on two important parameters: 1. characteristics of the node, and 2. the context of the given application. The current literature contains several centrality measures that have been defined to measure the importance of a node based on the given application requirements. These centrality measures assign a centrality value to each node that denotes its importance index. But in real life applications, we are more interested in the relative importance of the node that can be measured using its centrality rank based on the given centrality measure. To compute the centrality rank of a node, we need to compute the centrality value of all the nodes and compare them to get the rank. This process requires the entire network. So, it is not feasible for real-life applications due to the large size and dynamic nature of real world networks. In the present project, we aim to propose fast and efficient methods to estimate the global centrality rank of a node without computing the centrality value of all nodes. These methods can be further extended to estimate the rank without having the entire network. The proposed methods use the structural behavior of centrality measures, sampling techniques, or the machine learning models. In this work, we also discuss how to apply these methods for degree and closeness centrality rank estimation.
Researchers have focused on proposing fast and efficient methods to identify influential nodes and their ranking in a given network REF .
22872146
Global Rank Estimation
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science", "Physics" ] }
We propose a simple and efficient method for exploiting synthetic images when training a Deep Network to predict a 3D pose from an image. The ability to use synthetic images for training a Deep Network is extremely valuable as it is easy to create a virtually infinite training set made of such images, while capturing and annotating real images can be very cumbersome. However, synthetic images do not resemble real images exactly, and using them for training can result in suboptimal performance. It was recently shown that for exemplar-based approaches, it is possible to learn a mapping from the exemplar representations of real images to the exemplar representations of synthetic images. In this paper, we show that this approach is more general, and that a network can also be applied after the mapping to infer a 3D pose: At run-time, given a real image of the target object, we first compute the features for the image, map them to the feature space of synthetic images, and finally use the resulting features as input to another network which predicts the 3D pose. Since this network can be trained very effectively by using synthetic images, it performs very well in practice, and inference is faster and more accurate than with an exemplar-based approach. We demonstrate our approach on the LINEMOD dataset for 3D object pose estimation from color images, and the NYU dataset for 3D hand pose estimation from depth maps. We show that it allows us to outperform the state-of-the-art on both datasets.
REF mapped real image features to the feature space of synthetic images and used the mapped information as an input to a task-specific network, trained on synthetic data only.
4331981
Feature Mapping for Learning Fast and Accurate 3D Pose Inference from Synthetic Images
{ "venue": "2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition", "journal": "2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition", "mag_field_of_study": [ "Computer Science" ] }
Morphological analysis of the Arabic language is computationally intensive, has numerous forms and rules, and is intrinsically parallel. The investigation presented in this paper confirms that the effective development of parallel algorithms and the derivation of corresponding processors in hardware enable implementations with appealing performance characteristics. The presented developments of parallel hardware comprise the application of a variety of algorithm modelling techniques, strategies for concurrent processing, and the creation of pioneering hardware implementations that target modern programmable devices. The investigation includes the creation of a linguistic-based stemmer for Arabic verb root extraction with extended infix processing to attain high levels of accuracy. The implementations comprise three versions, namely, software, non-pipelined processor, and pipelined processor with high throughput. The targeted systems are high-performance multi-core processors for software implementations and high-end Field Programmable Gate Array systems for hardware implementations. The investigation includes a thorough evaluation of the methodology, and performance and accuracy analyses of the developed software and hardware implementations. The developed processors achieved significant speedups over the software implementation. The developed stemmer for verb root extraction with infix processing attained accuracies of 87% and 90.7% for analyzing the texts of the Holy Quran and its Chapter 29 -Surat Al-Ankabut.
Research on parallel algorithms and the derivation of corresponding processors in hardware has led to implementations targeting high-performance multi-core processors for software and high-end field-programmable gate array systems for hardware REF .
64550954
Parallel Hardware for Faster Morphological Analysis
{ "venue": "Comp. Inf. Sc. 30(2018) 531-546", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
In this paper we address three different computer vision tasks using a single multiscale convolutional network architecture: depth prediction, surface normal estimation, and semantic labeling. The network that we develop is able to adapt naturally to each task using only small modifications, regressing from the input image to the output map directly. Our method progressively refines predictions using a sequence of scales, and captures many image details without any superpixels or low-level segmentation. We achieve state-of-the-art performance on benchmarks for all three tasks.
In REF , an adaptive, multi-scale CNN architecture was proposed to jointly perform depth prediction, surface normal estimation and semantic labeling.
102496818
Predicting Depth, Surface Normals and Semantic Labels with a Common Multi-scale Convolutional Architecture
{ "venue": "2015 IEEE International Conference on Computer Vision (ICCV)", "journal": "2015 IEEE International Conference on Computer Vision (ICCV)", "mag_field_of_study": [ "Computer Science" ] }
When limb-disabled patients want to activate the nurse emergency call system, adjust the temperature on air conditioners, switch lights on or off, watch television, or address other urgent needs, and require assistance from other people, their needs cannot be readily met if no one is around. All of the above devices need to be operated manually or with a remote control. Thus, disabled patients cannot operate them by themselves, and this also increases the burden on nursing assistants. To improve this situation, a novel non-contact control system is designed to allow disabled patients to activate the nurse emergency call system and adjust other appliances. Disabled patients can stare at the control icons of a visual stimulus generator, which is accompanied by a wearable electroencephalogram (EEG) acquisition device with non-contact dry electrodes that monitors the patients' EEG signals and converts them into relevant control commands via the signal processing system, to achieve the function of nurse calling and the control of devices in a hospital. The results show that the proposed system can effectively monitor and convert EEG signals and achieve the control effects that a disabled patient desires in a hospital. Therefore, the non-contact control system provides a new control framework for hospital facilities, which significantly assists patients with limb disabilities.
Patients can wear an EEG acquisition device with electrodes that monitors their EEG signals and converts them into relevant commands for adjusting devices REF .
6777254
Novel Non-Contact Control System for Medical Healthcare of Disabled Patients
{ "venue": "IEEE Access", "journal": "IEEE Access", "mag_field_of_study": [ "Computer Science" ] }
Distributed applications running inside cloud systems are prone to performance anomalies due to various reasons such as resource contentions, software bugs, and hardware failures. One big challenge for diagnosing an abnormal distributed application is to pinpoint the faulty components. In this paper, we present a black-box online fault localization system called FChain that can pinpoint faulty components immediately after a performance anomaly is detected. FChain first discovers the onset time of abnormal behaviors at different components by distinguishing the abnormal change point from many change points caused by normal workload fluctuations. Faulty components are then pinpointed based on the abnormal change propagation patterns and inter-component dependency relationships. FChain performs runtime validation to further filter out false alarms. We have implemented FChain on top of the Xen platform and tested it using several benchmark applications (RUBiS, Hadoop, and IBM System S). Our experimental results show that FChain can quickly pinpoint the faulty components with high accuracy within a few seconds. FChain can achieve up to 90% higher precision and 20% higher recall than existing schemes. FChain is nonintrusive and light-weight, which imposes less than 1% overhead to the cloud system.
FChain REF monitors the execution of distributed applications to detect performance anomalies and to pinpoint the faulty component by reconstructing the propagation patterns of abnormal change points.
6384187
FChain: Toward Black-Box Online Fault Localization for Cloud Systems
{ "venue": "2013 IEEE 33rd International Conference on Distributed Computing Systems", "journal": "2013 IEEE 33rd International Conference on Distributed Computing Systems", "mag_field_of_study": [ "Computer Science" ] }
Recent work has shown the feasibility of a single-channel full-duplex wireless physical layer, allowing nodes to send and receive in the same frequency band at the same time. In this report, we first design and implement a real-time 64-subcarrier 10 MHz full-duplex OFDM physical layer, FD-PHY. The proposed FD-PHY not only allows synchronous full-duplex transmissions but also selective asynchronous full-duplex modes. Further, we show that in over-the-air experiments using optimal antenna placement on actual devices, the self-interference can be suppressed up to 80 dB, which is 10 dB more than previously reported results. Then we propose a full-duplex MAC protocol, FD-MAC, which builds on IEEE 802.11 with three new mechanisms -shared random backoff, header snooping and virtual backoffs. The new mechanisms allow FD-MAC to discover and exploit full-duplex opportunities in a distributed manner. Our over-the-air tests show over 70% throughput gains from using full-duplex over half-duplex in realistically used cases.
Sahai et al. proposed a full-duplex MAC (FD-MAC) protocol to evenly provide transmission opportunities and reduce collision probability REF .
9756937
Pushing the limits of Full-duplex: Design and Real-time Implementation
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science" ] }
Despite existing work on ensuring generalization of neural networks in terms of scale sensitive complexity measures, such as norms, margin and sharpness, these complexity measures do not offer an explanation of why neural networks generalize better with over-parametrization. In this work we suggest a novel complexity measure based on unit-wise capacities resulting in a tighter generalization bound for two layer ReLU networks. Our capacity bound correlates with the behavior of test error with increasing network sizes, and could potentially explain the improvement in generalization with over-parametrization. We further present a matching lower bound for the Rademacher complexity that improves over previous capacity lower bounds for neural networks.
REF investigate the Rademacher complexity of two-layer networks.
44130076
Towards Understanding the Role of Over-Parametrization in Generalization of Neural Networks
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Mathematics", "Computer Science" ] }
There exist several automatic verification tools for cryptographic protocols, but only a few of them are able to check protocols in the presence of algebraic properties. Most of these tools deal either with Exclusive-Or (XOR) or with exponentiation properties, so-called Diffie-Hellman (DH). In the last few years, the number of these tools has increased and some existing tools have been updated. Our aim is to compare their performances by analysing a selection of cryptographic protocols using XOR and DH. We compare execution time and memory consumption for different versions of the following tools: OFMC, CL-Atse, Scyther, Tamarin, TA4SP, and extensions of ProVerif (XOR-ProVerif and DH-ProVerif). Our evaluation shows that in most of the cases the new versions of the tools are faster but consume more memory. We also show how the new tools, Tamarin, Scyther and TA4SP, can be compared to previous ones. We also discover and explain, for the protocol IKEv2-DS, a difference of modelling by the authors of different tools, which leads to different security results. Finally, for Exclusive-Or and Diffie-Hellman properties, we construct two families of protocols P_xor^i and P_dh^i that allow us to clearly see for the first time the impact of the number of operators and variables on the tools' performances. S3A [DSV03] and [CE02]. All these tools can verify one or several security properties and rely on different theoretical approaches, e.g., rewriting, constraint solving, SAT solvers, resolution of Horn clauses, or tree automata. All these tools work in the symbolic world, where all messages are represented by an algebra of terms. Moreover, they also consider the well-known Dolev-Yao intruder model [DY81], where a powerful intruder is considered [Cer01]. This intruder controls the network; listens, stops, forges, replays or modifies some messages according to its capabilities; and can play several sessions of a protocol. The perfect encryption hypothesis is often assumed, meaning that without the secret key associated with an encrypted message it is not possible to decrypt the ciphertext. In such a model most of the tools are able to verify two security properties: secrecy and authentication. The first property ensures that an intruder cannot learn a secret message. The authentication property means that one participant of the protocol is sure to be communicating with another one. Historically, formal methods were developed for analysing cryptographic protocols after the flaw discovered by G. Lowe [Low96], 17 years after the publication of the Needham-Schroeder protocol [NS78]. The security of this protocol had been proven for one session using the BAN logic in [BAN90,BM94]. The flaw discovered by G. Lowe [Low96] works because the intruder plays one session with Alice and at the same time a second one with Bob. In this second session, Bob believes that he is talking to Alice. Then the intruder learns the shared secret key that Bob thinks he shares with Alice. This example clearly shows that even for a protocol of three messages the number of possible combinations outpaces human capabilities. In the presence of algebraic properties, the number of possible combinations to construct traces blows up. The situation is even worse because some attacks can be missed.
Let us consider the following 3-pass Shamir protocol, composed of three messages, where {m}_KA denotes the encryption of m with the secret key KA: A → B: {m}_KA; B → A: {{m}_KA}_KB; A → B: {m}_KB. This protocol works only if the encryption has the following algebraic property: {{m}_KA}_KB = {{m}_KB}_KA. In order to implement this protocol one can use the One-Time Pad (OTP) encryption, also known as Vernam encryption because it is generally credited to Gilbert S. Vernam and Joseph O. Mauborgne, although it was in fact invented 35 years earlier by Frank Miller [Bel11]. The encryption of the message m with the key k is m ⊕ k. This encryption is perfectly secure according to Shannon's information theory, meaning that without knowing the key no information about the message is leaked [Vau05,BJL+10]. Moreover, the OTP encryption is key-commutative, since: {{m}_KA}_KB = (m ⊕ KA) ⊕ KB = (m ⊕ KB) ⊕ KA = {{m}_KB}_KA. Unfortunately, combining the
More recently, Lafourcade and Puys REF focus on performance analysis of a number of tools including a ProVerif extension and analysis of 21 cryptographic protocols dealing with Exclusive-Or (xor) and exponentiation properties like Diffie-Hellman (DH).
40438024
Performance Evaluations of Cryptographic Protocols Verification Tools Dealing with Algebraic Properties
{ "venue": "FPS", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
It is common to use domain-specific terminology - attributes - to describe the visual appearance of objects. In order to scale the use of these describable visual attributes to a large number of categories, especially those not well studied by psychologists or linguists, it will be necessary to find alternative techniques for identifying attribute vocabularies and for learning to recognize attributes without hand-labeled training data. We demonstrate that it is possible to accomplish both these tasks automatically by mining text and image data sampled from the Internet. The proposed approach also characterizes attributes according to their visual representation: global or local, and type: color, texture, or shape. This work focuses on discovering attributes and their visual appearance, and is as agnostic as possible about the textual description.
Berg et al. REF identify attributes by mining text and images from the web.
1698147
Automatic Attribute Discovery and Characterization from Noisy Web Data
{ "venue": "ECCV", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Deep neural networks have enjoyed remarkable success for various vision tasks, however it remains challenging to apply CNNs to domains lacking a regular underlying structure, such as 3D point clouds. Towards this we propose a novel convolutional architecture, termed SpiderCNN, to efficiently extract geometric features from point clouds. SpiderCNN is comprised of units called SpiderConv, which extend convolutional operations from regular grids to irregular point sets that can be embedded in R^n, by parametrizing a family of convolutional filters. We design the filter as a product of a simple step function that captures local geodesic information and a Taylor polynomial that ensures the expressiveness. SpiderCNN inherits the multi-scale hierarchical architecture from classical CNNs, which allows it to extract semantic deep features. Experiments on ModelNet40 [4] demonstrate that SpiderCNN achieves state-of-the-art accuracy of 92.4% on standard benchmarks, and shows competitive performance on segmentation tasks.
SpiderCNN REF defines a continuous kernel function as a product of a simple step function and a Taylor polynomial.
4536146
SpiderCNN: Deep Learning on Point Sets with Parameterized Convolutional Filters
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science" ] }
We consider nondeterministic probabilistic programs with the most basic liveness property of termination. We present efficient methods for termination analysis of nondeterministic probabilistic programs with polynomial guards and assignments. Our approach is through synthesis of polynomial ranking supermartingales, that on one hand significantly generalizes linear ranking supermartingales and on the other hand is a counterpart of polynomial ranking-functions for proving termination of nonprobabilistic programs. The approach synthesizes polynomial ranking-supermartingales through Positivstellensatz's, yielding an efficient method which is not only sound, but also semi-complete over a large subclass of programs. We show experimental results to demonstrate that our approach can handle several classical programs with complex polynomial guards and assignments, and can synthesize efficient quadratic ranking-supermartingales when a linear one does not exist even for simple affine programs.
More general methods REF are able to synthesize polynomial ranking-supermartingales for proving termination.
2098319
Termination Analysis of Probabilistic Programs through Positivstellensatz's
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science" ] }
This paper introduces Pairwise Constrained Component Analysis (PCCA), a new algorithm for learning distance metrics from sparse pairwise similarity/dissimilarity constraints in high dimensional input space, a problem for which most existing distance metric learning approaches are not adapted. PCCA learns a projection into a low-dimensional space where the distance between pairs of data points respects the desired constraints, exhibiting good generalization properties in the presence of high dimensional data. The paper also shows how to efficiently kernelize the approach. PCCA is experimentally validated on two challenging vision tasks, face verification and person re-identification, for which we obtain state-of-the-art results.
In addition, Mignon and Jurie proposed Pairwise Constrained Component Analysis (PCCA) to project the original data into a lower dimensional space REF , in which the distance between pairs has the desired properties.
425268
PCCA: A new approach for distance learning from sparse pairwise constraints
{ "venue": "2012 IEEE Conference on Computer Vision and Pattern Recognition", "journal": "2012 IEEE Conference on Computer Vision and Pattern Recognition", "mag_field_of_study": [ "Computer Science" ] }
We consider a linear stochastic bandit problem where the dimension K of the unknown parameter θ is larger than the sampling budget n. Since usual linear bandit algorithms have a regret of order O(K√n), it is in general impossible to obtain a sub-linear regret without further assumption. In this paper we make the assumption that θ is S-sparse, i.e., has at most S non-zero components, and that the set of arms is the unit ball for the ||·||_2 norm. We combine ideas from Compressed Sensing and Bandit Theory to derive an algorithm with a regret bound in O(S√n). We detail an application to the problem of optimizing a function that depends on many variables but among which only a small number of them (initially unknown) are relevant.
Similarly, REF also considered the high-dimensional stochastic linear bandits with sparsity, combining the ideas from compressed sensing and bandit theory.
7380181
Bandit Theory meets Compressed Sensing for high dimensional Stochastic Linear Bandit
{ "venue": "AISTATS", "journal": null, "mag_field_of_study": [ "Mathematics", "Computer Science" ] }
There has been an explosion of work in the vision & language community during the past few years from image captioning to video transcription, and answering questions about images. These tasks have focused on literal descriptions of the image. To move beyond the literal, we choose to explore how questions about an image are often directed at commonsense inference and the abstract events evoked by objects in the image. In this paper, we introduce the novel task of Visual Question Generation (VQG), where the system is tasked with asking a natural and engaging question when shown an image. We provide three datasets which cover a variety of images from object-centric to event-centric, with considerably more abstract training data than provided to state-of-the-art captioning systems thus far. We train and test several generative and retrieval models to tackle the task of VQG. Evaluation results show that while such models ask reasonable questions for a variety of images, there is still a wide gap with human performance which motivates further work on connecting images with commonsense knowledge and pragmatics. Our proposed task offers a new challenge to the community which we hope furthers interest in exploring deeper connections between vision & language.
Visual question generation was also studied in REF , with an emphasis on generating questions about images that are beyond the literal visual content of the image.
16227864
Generating Natural Questions About an Image
{ "venue": "ACL", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
We compare the usage of a Digital Library with many different categories of collections, by examining its log files for a period of twenty months, and we conclude that the access points that the users mostly refer to, depend heavily on the type of content of the collection, the detail of the existing metadata and the target user group. We also found that most users tend to use simple query structures (e.g. only one search term) and very few and primitive operations to accomplish their request. Furthermore, as they get more experienced, they reduce the number of operations in their sessions.
Sfakakis and Kapidakis REF compared in 2002 the usage of a Digital Library with many different categories of collections, concluding that the access points users mostly refer to depend heavily on the type of content of the collection, the detail of the existing metadata, and the target user group.
16602535
User behavior tendencies on data collections in a digital library
{ "venue": "In: Lecture Notes in Computer Science", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
In sensor networks, analyzing power consumption before actual deployment is crucial for maximizing service lifetime. This paper proposes an instruction-level power estimator (IPEN) for sensor networks. IPEN is an accurate and fine grain power estimation tool, using an instruction-level simulator. It is independent of the operating system, so many different kinds of sensor node software can be simulated for estimation. We have developed the power model of a Micaz-compatible mote. The power consumption of the ATmega128L microcontroller is modeled with the base energy cost and the instruction overheads. The CC2420 communication component and other peripherals are modeled according to their operation states. The energy consumption estimation module profiles peripheral accesses and function calls while an application is running. IPEN has shown excellent power estimation accuracy, with less than 5% estimation error compared to real sensor network implementation. With IPEN's high precision instruction-level energy prediction, users can accurately estimate a sensor network's energy consumption and achieve fine-grained optimization of their software.
NQEM is a high-fidelity instruction-level simulator that can be extended into a power estimator REF.
56101461
Instruction-Level Power Estimator for Sensor Networks
{ "venue": null, "journal": "Etri Journal", "mag_field_of_study": [ "Computer Science" ] }
The amount of sequence information in public repositories is growing at a rapid rate. Although these data are likely to contain clinically important information that has not yet been uncovered, our ability to effectively mine these repositories is limited. Here we introduce Sequence Bloom Trees (SBTs), a method for querying thousands of short-read sequencing experiments by sequence, 162 times faster than existing approaches. The approach searches large data archives for all experiments that involve a given sequence. We use SBTs to search 2,652 human blood, breast and brain RNA-seq experiments for all 214,293 known transcripts in under 4 days using less than 239 MB of RAM and a single CPU. Searching sequence archives at this scale and in this time frame is currently not possible using existing tools. The National Institutes of Health (NIH) Sequence Read Archive (SRA) 1 contains ~3 petabases of sequence information that can be used to answer biological questions that single experiments do not have the power to address. However, searching the entirety of such a database for a sequence has not been possible in reasonable computational time. Some progress has been made toward enabling sequence searches on large databases. The NIH SRA provides a sequence search functionality 2; however, the search is restricted to a limited number of experiments. Existing full-text indexing data structures such as Burrows-Wheeler transform 3, FM-index 4 or others 5-7 are currently unable to mine data of this scale. Word-based indices 8, 9, such as those used by internet search engines, are not appropriate for edit-distance-based biological sequence searches. The sequence-specific solution caBLAST and its variants 10-12 require an index of known genomes, genes or proteins, and so cannot search for novel sequences. Further, none of these existing approaches are able to match a query sequence q that spans many short reads. Here, we use an indexing data structure, Sequence Bloom Tree (SBT), to identify all experiments in a database that contain a given query sequence q. A query is an arbitrary sequence, such as a transcript. The SBT index is independent of eventual queries, so the approach is not limited to searching for known sequences, and the index can be efficiently built and stored in limited additional space. It also does not require retaining the original sequence files and can be distributed separately from the data. SBTs are dynamic, allowing insertions and deletions of new experiments. A coarse-grained version of an SBT can be downloaded and subsequently refined as more specific results are needed. They can be searched using low memory for the existence of arbitrary query sequences. We show that SBTs can search large collections of RNA-seq experiments for a given transcript orders of magnitude faster than existing approaches. RESULTS SBTs create a hierarchy of compressed bloom filters 13, 14, which efficiently store a set of items. Each bloom filter contains the set of k-mers (length-k subsequences) present within a subset of the sequencing experiments. SBTs are binary trees in which the sequencing experiments are associated with leaves, and each node v of the SBT contains a bloom filter that contains the set of k-mers present in any read in any experiment in the subtree rooted at v (Supplementary Fig. 1). We reduced the space usage by using bloom filters that are compressed by the RRR 15 compression scheme (Online Methods).
Hierarchies of bloom filters have been used for data management on distributed systems 16. However, they have not previously been applied to sequence search, and we find that this allows us to tune the bloom filter error rate much higher than in other contexts (Theorem 2, Online Methods), vastly reducing the space requirements. Bloom filters have also been used for storing implicit de Bruijn graphs 17,18, and one view of SBTs is as a generalization of this to multiple graphs. We used SBTs to search RNA-seq experiments for expressed isoforms. We built an SBT on 2,652 RNA-seq experiments in the SRA for human blood, breast and brain tissues (Supplementary Table 1). The entire SBT required only 200 GB (2.3% of the size of the original sequencing data) (Supplementary Table 2). For these data, construction of the tree took ≈2.5 min per file (Supplementary Table 3). These experiments could be searched for a single transcript query in, on average, 20 min (Fig. 1), using less than 239 MB of RAM with a single thread (Online Methods). We estimate the comparable search time using SRA-BLAST 2 or mapping by STAR 19 to be 2.2 d and 921 d, respectively (Online Methods), though SRA-BLAST and STAR return alignments whereas SBT does not. However, even a very fast aligner such as STAR cannot identify query-containing experiments as fast as SBT. We also tested batches of 100 queries and found SBT was an estimated 4,056 times faster than a batched version of the mapping approach (Supplementary Fig. 2). These queries were performed over varying sensitivity threshold θ (the minimum fraction of query k-mers that must exist in order to return a 'hit') as well as the transcripts per million (TPM) threshold used to select the query set (Supplementary Figs. 3 and 4). For approximately half of the queries, the upper levels of the SBT hierarchy provided substantial benefit, particularly on queries that were not expressed in any experiment (Supplementary Fig. 5 and Supplementary Table 4). SBTs can speed up the use of algorithms, such as STAR or SRA-BLAST, by first ruling out experiments in which the query sequences are not present. This allows the subsequent processing time to scale with the size of the number of hits rather than the size of the database. We first used SBTs to filter the full dataset consisting of 2,652 human blood, breast and brain RNA-seq experiments. We then compared the performance of STAR or SRA-BLAST on the filtered dataset with the time to process the unfiltered dataset with these algorithms. Using SBTs to first filter the data reduced the overall query time of STAR or SRA-BLAST by a factor of ≈3 (Supplementary Fig. 6). To analyze the accuracy of the SBT filter, we compared the experiments returned by SBT with those in which the query sequence was estimated to be expressed using Sailfish 20. Because it is impractical to use existing tools to estimate expression over the entire set of experiments, we queried the entire tree, but estimated accuracy on a set of 100 random files on which we ran Sailfish (Fig. 2). Three collections of representative queries were constructed using Sailfish, denoted by High, Medium and Low, which included transcripts of length >1,000 nt that were likely to be expressed at a higher, medium or low level in at least one experiment contained in the set of 100 experiments on which Sailfish was run. The High set was chosen to be 100 random transcripts with an estimated abundance of >1,000 TPM in at least one experiment.
The Medium and Low query sets were similarly chosen randomly from among transcripts with >500 and >100 TPM, respectively. These Sailfish estimates were taken as the ground truth of expression for the query transcripts. Both false positives and false negatives can arise from a mismatch between SBT's definition of present (coverage of k-mers over a sufficient fraction of the query) and Sailfish's definition of expressed (as estimated by read mapping and an expectation-maximization inference). These two definitions are related, but not perfectly aligned, resulting in some disagreement that is quantified by the false-positive rates (FPR) and false-negative rates of Figure 2. The observed false negatives are primarily driven by a few outlier queries for which the SBT reports no results but their expression is above the TPM threshold as estimated by Sailfish. This is supported by the fact that the average true-positive rate at θ = 0.7 for queries that return at least one file was 96-100%, and the median true-positive rate across all queries was 100% for all but the strictest θ (Fig. 2). We used SBT to search all blood, brain and breast SRA sequencing runs for the expression of all 214,293 known human transcripts and used these results to identify tissue-specific transcripts (Supplementary Table 5 and Supplementary Fig. 7). This search took 3.3 d using a single thread (Supplementary Fig. 8). There are presently no search or alignment tools that can solve this scale of sequence search problem in a reasonable time frame, but we estimate an equivalent search using Sailfish would take 92 d. The speed and computational efficiency of SBTs will enable both individual laboratories and sequencing centers to support large-scale sequence searches, not just for RNA-seq data, but for genomic and metagenomic collections as well. Researchers could search for conditions from among thousands that are likely to express a given novel isoform or use SBTs to identify metagenomic samples that are likely to contain a particular strain of bacteria. Fast search of this type will be essential to make good use of the ever-growing collection of available sequencing data. Currently, it is difficult to access all the relevant data relating to a particular research question from available sequencing experiments. Individual hospitals, sequencing centers, research consortia and research groups are collecting data at a rapid pace, and face the same difficulty of not being able to test computational hypotheses quickly or to find the relevant conditions for further study. SBTs enable the efficient mining of these data and could be used to uncover biological insights that can be revealed only through the analysis of multiple data sets from different sources. Furthermore, SBTs do not require prior knowledge about sequences of interest, making it possible to identify, for example, the expression of unknown isoforms or long noncoding RNAs. This algorithm makes it practical to search large sequencing repositories and may open up new uses for these rich collections of data. [Figure 1 caption] Estimated running times of search tools for one transcript. The SBT per-query time was recorded using a maximum of a single filter in active memory and one thread. The other bars show the estimated time to achieve the same query results using SRA-BLAST and STAR.
The experiment discovery problem was first posed by Solomon and Kingsford REF , where they introduced the sequence Bloom tree (SBT).
18391638
Fast search of thousands of short-read sequencing experiments
{ "venue": "Nature Biotechnology", "journal": "Nature Biotechnology", "mag_field_of_study": [ "Biology", "Medicine" ] }
Abstract. One of the key differences between the learning mechanism of humans and Artificial Neural Networks (ANNs) is the ability of humans to learn one task at a time. ANNs, on the other hand, can only learn multiple tasks simultaneously. Any attempts at learning new tasks incrementally cause them to completely forget about previous tasks. This lack of ability to learn incrementally, called Catastrophic Forgetting, is considered a major hurdle in building a true AI system. In this paper, our goal is to isolate the truly effective existing ideas for incremental learning from those that only work under certain conditions. To this end, we first thoroughly analyze the current state of the art (iCaRL) method for incremental learning and demonstrate that the good performance of the system is not because of the reasons presented in the existing literature. We conclude that the success of iCaRL is primarily due to knowledge distillation and recognize a key limitation of knowledge distillation, i.e., it often leads to bias in classifiers. Finally, we propose a dynamic threshold moving algorithm that is able to successfully remove this bias. We demonstrate the effectiveness of our algorithm on CIFAR100 and MNIST datasets showing near-optimal results. Our implementation is available at: https://github.com/Khurramjaved96/incremental-learning.
Similarly, Javed and Shafait REF learn an end-to-end classifier by proposing a dynamic threshold moving algorithm.
49655438
Revisiting Distillation and Incremental Classifier Learning
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science", "Mathematics" ] }
Three essential criteria are important for activity planning, including: (1) finding a group of attendees familiar with the initiator, (2) ensuring each attendee in the group to have tight social relations with most of the members in the group, and (3) selecting an activity period available for all attendees. Therefore, this paper proposes Social-Temporal Group Query to find the activity time and attendees with the minimum total social distance to the initiator. Moreover, this query incorporates an acquaintance constraint to avoid finding a group with mutually unfamiliar attendees. Efficient processing of the social-temporal group query is very challenging. We show that the problem is NP-hard via a proof and formulate the problem with Integer Programming. We then propose two efficient algorithms, SGSelect and STGSelect, which include effective pruning techniques and employ the idea of pivot time slots to substantially reduce the running time, for finding the optimal solutions. Experimental results indicate that the proposed algorithms are much more efficient and scalable. In the comparison of solution quality, we show that STGSelect outperforms the algorithm that represents manual coordination by the initiator.
Yang et al. REF try to find a group of attendees familiar with a given activity initiator, and ensure each attendee in the group to have tight social relations with most of the members in the group.
12382263
On Social-Temporal Group Query with Acquaintance Constraint
{ "venue": "PVLDB", "journal": "PVLDB", "mag_field_of_study": [ "Computer Science" ] }
Outsourcing of personal health record (PHR) has attracted considerable interest recently. It can not only bring much convenience to patients, it also allows efficient sharing of medical information among researchers. As the medical data in PHR is sensitive, it has to be encrypted before outsourcing. To achieve fine-grained access control over the encrypted PHR data becomes a challenging problem. In this paper, we provide an affirmative solution to this problem. We propose a novel PHR service system which supports efficient searching and fine-grained access control for PHR data in a hybrid cloud environment, where a private cloud is used to assist the user to interact with the public cloud for processing PHR data. In our proposed solution, we make use of attribute-based encryption (ABE) technique to obtain fine-grained access control for PHR data. In order to protect the privacy of PHR owners, our ABE is anonymous. That is, it can hide the access policy information in ciphertexts. Meanwhile, our solution can also allow efficient fuzzy search over PHR data, which can greatly improve the system usability. We also provide security analysis to show that the proposed solution is secure and privacy-preserving. The experimental results demonstrate the efficiency of the proposed scheme.
In another study, REF proposed a PHR service system that provides efficient searching, fine-grained access control, and PHR data sharing with anonymous ABE in a hybrid cloud environment.
28993394
An efficient PHR service system supporting fuzzy keyword search and fine-grained access control
{ "venue": null, "journal": "Soft Computing", "mag_field_of_study": [ "Computer Science" ] }
Automatic program comprehension is particularly useful when applied to sparse matrix codes, since it allows abstracting from, e.g., the specific sparse matrix storage formats used in the code. In this paper we describe SPARAMAT, a system for speculative automatic program comprehension suitable for sparse matrix codes, and its implementation.
Keßler and Smith REF described a system, SPARAMAT, for concept comprehension that is particularly suitable for sparse array codes.
11294367
The SPARAMAT approach to automatic comprehension of sparse matrix computations
{ "venue": "Proceedings Seventh International Workshop on Program Comprehension", "journal": "Proceedings Seventh International Workshop on Program Comprehension", "mag_field_of_study": [ "Computer Science" ] }
The World-Wide Web consists of a huge number of unstructured documents, but it also contains structured data in the form of HTML tables. We extracted 14.1 billion HTML tables from Google's general-purpose web crawl, and used statistical classification techniques to find the estimated 154M that contain high-quality relational data. Because each relational table has its own "schema" of labeled and typed columns, each such table can be considered a small structured database. The resulting corpus of databases is larger than any other corpus we are aware of, by at least five orders of magnitude. We describe the WebTables system to explore two fundamental questions about this collection of databases. First, what are effective techniques for searching for structured data at search-engine scales? Second, what additional power can be derived by analyzing such a huge corpus? First, we develop new techniques for keyword search over a corpus of tables, and show that they can achieve substantially higher relevance than solutions based on a traditional search engine. Second, we introduce a new object derived from the database corpus: the attribute correlation statistics database (AcsDB) that records corpus-wide statistics on cooccurrences of schema elements. In addition to improving search relevance, the AcsDB makes possible several novel applications: schema auto-complete, which helps a database designer to choose schema elements; attribute synonym finding, which automatically computes attribute synonym pairs for schema matching; and join-graph traversal, which allows a user to navigate between extracted schemas using automatically-generated join links.
Cafarella et al. REF built the WebTables search system on a corpus of 14.1 billion HTML tables extracted from Google's general-purpose web crawl.
15642206
WebTables: exploring the power of tables on the web
{ "venue": "PVLDB", "journal": "PVLDB", "mag_field_of_study": [ "Computer Science" ] }
We analyze reported patches for three existing generate-and-validate patch generation systems (GenProg, RSRepair, and AE). The basic principle behind generate-and-validate systems is to accept only plausible patches that produce correct outputs for all inputs in the validation test suite. Because of errors in the patch evaluation infrastructure, the majority of the reported patches are not plausible: they do not produce correct outputs even for the inputs in the validation test suite. The overwhelming majority of the reported patches are not correct and are equivalent to a single modification that simply deletes functionality. Observed negative effects include the introduction of security vulnerabilities and the elimination of desirable functionality. We also present Kali, a generate-and-validate patch generation system that only deletes functionality. Working with a simpler and more effectively focused search space, Kali generates at least as many correct patches as prior GenProg, RSRepair, and AE systems. Kali also generates at least as many patches that produce correct outputs for the inputs in the validation test suite as the three prior systems. We also discuss the patches produced by ClearView, a generate-and-validate binary hot patching system that leverages learned invariants to produce patches that enable systems to survive otherwise fatal defects and security attacks. Our analysis indicates that ClearView successfully patches 9 of the 10 security vulnerabilities used to evaluate the system. At least 4 of these patches are correct.
Previous work shows that, contrary to the design principle of GenProg, RSRepair, and AE, the majority of the reported patches of these three systems are implausible due to errors in the patch validation REF .
6845282
An analysis of patch plausibility and correctness for generate-and-validate patch generation systems
{ "venue": "ISSTA 2015", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Abstract. Taint-tracking is emerging as a general technique in software security to complement virtualization and static analysis. It has been applied for accurate detection of a wide range of attacks on benign software, as well as in malware defense. Although it is quite robust for tackling the former problem, application of taint analysis to untrusted (and potentially malicious) software is riddled with several difficulties that lead to gaping holes in defense. These holes arise not only due to the limitations of information flow analysis techniques, but also the nature of today's software architectures and distribution models. This paper highlights these problems using an array of simple but powerful evasion techniques that can easily defeat taint-tracking defenses. Given today's binary-based software distribution and deployment models, our results suggest that information flow techniques will be of limited use against future malware that has been designed with the intent of evading these defenses.
REF describe evasion techniques that exploit control dependencies to easily defeat dynamic information flow analysis.
7068075
On the limits of information flow techniques for malware analysis and containment
{ "venue": "In DIMVA", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Abstract-This paper focuses on scalability and robustness of spectral clustering for extremely large-scale datasets with limited resources. Two novel algorithms are proposed, namely, ultra-scalable spectral clustering (U-SPEC) and ultra-scalable ensemble clustering (U-SENC). In U-SPEC, a hybrid representative selection strategy and a fast approximation method for K-nearest representatives are proposed for the construction of a sparse affinity sub-matrix. By interpreting the sparse sub-matrix as a bipartite graph, the transfer cut is then utilized to efficiently partition the graph and obtain the clustering result. In U-SENC, multiple U-SPEC clusterers are further integrated into an ensemble clustering framework to enhance the robustness of U-SPEC while maintaining high efficiency. Based on the ensemble generation via multiple U-SPEC's, a new bipartite graph is constructed between objects and base clusters and then efficiently partitioned to achieve the consensus clustering result. It is noteworthy that both U-SPEC and U-SENC have nearly linear time and space complexity, and are capable of robustly and efficiently partitioning ten-million-level nonlinearly-separable datasets on a PC with 64GB memory. Experiments on various large-scale datasets have demonstrated the scalability and robustness of our algorithms. The MATLAB code and experimental data are available at https://www.researchgate.net/publication/330760669.
For better performance of spectral clustering over large-scale data with very limited resources, Huang et al. REF interpret the sparse affinity sub-matrix as a bipartite graph and use the transfer cut to obtain the clustering result.
67855861
Ultra-Scalable Spectral Clustering and Ensemble Clustering
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science", "Mathematics" ] }
Abstract Many real-world applications reveal difficulties in learning classifiers from imbalanced data. Although several methods for improving classifiers have been introduced, the identification of conditions for the efficient use of the particular method is still an open research problem. It is also worthwhile to study the nature of imbalanced data, characteristics of the minority class distribution and their influence on classification performance. However, current studies on imbalanced data difficulty factors have been mainly done with artificial datasets and their conclusions are not easily applicable to the real-world problems, also because the methods for their identification are not sufficiently developed. In our paper, we capture difficulties of class distribution in real datasets by considering four types of minority class examples: safe, borderline, rare and outliers. First, we confirm their occurrence in real data by exploring multidimensional visualizations of selected datasets. Then, we introduce a method for an identification of these types of examples, which is based on analyzing a class distribution in a local neighbourhood of the considered example. Two ways of modeling this neighbourhood are presented: with k-nearest examples and with kernel functions. Experiments with artificial datasets show that these methods are able to re-discover simulated types of examples. Next contributions of this paper include carrying out a comprehensive experimental study with 26 real world imbalanced datasets, where (1) we identify new data characteristics based on the analysis of types of minority examples; (2) we demonstrate that considering the results of this analysis allows us to differentiate classification performance of popular classifiers and pre-processing methods and to evaluate their areas of competence. Finally, we highlight directions of exploiting the results of our analysis for developing new algorithms for learning classifiers and pre-processing methods.
In a study by Napierała and Stefanowski REF, the authors proposed a method for categorizing different types of minority objects.
17513719
Types of minority class examples and their influence on learning classifiers from imbalanced data
{ "venue": "Journal of Intelligent Information Systems", "journal": "Journal of Intelligent Information Systems", "mag_field_of_study": [ "Computer Science" ] }
Recent studies show that voltage scaling, which is an efficient energy management technique, has a direct and negative effect on system reliability because of the increased rate of transient faults (e.g., those induced by cosmic particles). In this article, we propose energy management schemes that explicitly take system reliability into consideration. The proposed reliability-aware energy management schemes dynamically schedule recoveries for tasks to be scaled down to recuperate the reliability loss due to energy management. Based on the amount of available slack, the application size, and the fault rate changes, we analyze when it is profitable to reclaim the slack for energy savings without sacrificing system reliability. Checkpoint technique is further explored to efficiently use the slack. Analytical and simulation results show that the proposed schemes can achieve comparable energy savings as ordinary energy management schemes (which are reliability-ignorant) while preserving system reliability. The ordinary energy management schemes that ignore the effects of voltage scaling on fault rate changes could lead to drastically decreased system reliability.
Zhu REF presented reliability-aware energy management schemes that dynamically schedule error recoveries for tasks to compensate for reliability loss due to dynamic voltage and frequency scaling.
2756669
Reliability-aware dynamic energy management in dependable embedded real-time systems
{ "venue": "TECS", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Abstract-The emerging federated cloud paradigm advocates sharing of resources among cloud providers, to exploit temporal availability of resources and diversity of operational costs for job serving. While extensive studies exist on enabling interoperability across different cloud platforms, a fundamental question on cloud economics remains unanswered: When and how should a cloud trade VMs with others, such that its net profit is maximized over the long run? In order to answer this question by the federation, a number of important, correlated decisions, including job scheduling, server provisioning and resource pricing, need to be dynamically made, with long-term profit optimality being a goal. In this work, we design efficient algorithms for inter-cloud resource trading and scheduling in a federation of geo-distributed clouds. For VM trading among clouds, we apply a double auctionbased mechanism that is strategyproof, individual rational, and ex-post budget balanced. Coupling with the auction mechanism is an efficient, dynamic resource trading and scheduling algorithm, which carefully decides the true valuations of VMs in the auction, optimally schedules stochastic job arrivals with different SLAs onto the VMs, and judiciously turns on and off servers based on the current electricity prices. Through rigorous analysis, we show that each individual cloud, by carrying out our dynamic algorithm, can achieve a time-averaged profit arbitrarily close to the offline optimum.
Li et al. REF study the resource trading among multiple IaaS clouds and design a double auction mechanism.
11022899
Profit-maximizing virtual machine trading in a federation of selfish clouds
{ "venue": "2013 Proceedings IEEE INFOCOM", "journal": "2013 Proceedings IEEE INFOCOM", "mag_field_of_study": [ "Computer Science" ] }
Abstract-In this paper, an electrocardiogram (ECG) compression algorithm, called analysis by synthesis ECG compressor (ASEC), is introduced. The ASEC algorithm is based on analysis by synthesis coding, and consists of a beat codebook, long and short-term predictors, and an adaptive residual quantizer. The compression algorithm uses a defined distortion measure in order to efficiently encode every heartbeat, with minimum bit rate, while maintaining a predetermined distortion level. The compression algorithm was implemented and tested with both the percentage rms difference (PRD) measure and the recently introduced weighted diagnostic distortion (WDD) measure. The compression algorithm has been evaluated with the MIT-BIH Arrhythmia Database. A mean compression rate of approximately 100 bits/s (compression ratio of about 30 : 1) has been achieved with a good reconstructed signal quality (WDD below 4% and PRD below 8%). The ASEC was compared with several well-known ECG compression algorithms and was found to be superior at all tested bit rates. A mean opinion score (MOS) test was also applied. The testers were three independent expert cardiologists. As in the quantitative test, the proposed compression algorithm was found to be superior to the other tested compression algorithms.
An ECG compression algorithm, called Analysis by Synthesis ECG compressor (ASEC), has been introduced in REF .
7689233
ECG signal compression using analysis by synthesis coding
{ "venue": "IEEE Transactions on Biomedical Engineering", "journal": "IEEE Transactions on Biomedical Engineering", "mag_field_of_study": [ "Computer Science", "Medicine" ] }
In the recent development of avionics systems, Integrated Modular Avionics (IMA) is advocated for next generation architecture that needs integration of mixed-criticality real-time applications. These integrated applications meet their own timing constraints while sharing avionics computer resources. To guarantee timing constraints and dependability of each application, an IMA-based system is equipped with the schemes for spatial and temporal partitioning. We refer to this model as SP-RTS (Strongly Partitioned Real-Time System), which deals with processor partitions and communication channels as its basic scheduling entities. This paper presents a partition and channel-scheduling algorithm for the SP-RTS. The basic idea of the algorithm is to use a two-level hierarchical schedule that activates partitions (or channels) following a distance-constraints guaranteed cyclic schedule and then dispatches tasks (or messages) according to a fixed priority schedule. To enhance schedulability, we devised heuristic algorithms for deadline decomposition and channel combining. The simulation results show the schedulability analysis of the two-level scheduling algorithm and the beneficial characteristics of the proposed deadline decomposition and channel combining algorithms.
Lee et al. REF presented a partition and channel-scheduling algorithm for the strongly partitioned real-time system.
920212
Resource scheduling in dependable integrated modular avionics
{ "venue": "Proceeding International Conference on Dependable Systems and Networks. DSN 2000", "journal": "Proceeding International Conference on Dependable Systems and Networks. DSN 2000", "mag_field_of_study": [ "Computer Science" ] }
We propose a method for the task of identifying the general positions of users in online debates, i.e., support or oppose the main topic of an online debate, by exploiting local information in their remarks within the debate. An online debate is a forum where each user posts an opinion on a particular topic while other users state their positions by posting their remarks within the debate. The supporting or opposing remarks are made by directly replying to the opinion, or indirectly to other remarks (to express local agreement or disagreement), which makes the task of identifying users' general positions difficult. A prior study has shown that a link-based method, which completely ignores the content of the remarks, can achieve higher accuracy for the identification task than methods based solely on the contents of the remarks. In this paper, we show that incorporating the textual content of the remarks into the link-based method can yield higher accuracy in the identification task.
REF identify users' general positions (support or oppose) in online debates.
18151048
Support or Oppose? Classifying Positions in Online Debates from Reply Activities and Opinion Expressions
{ "venue": "COLING - POSTERS", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
We propose the coupled generative adversarial network (CoGAN) framework for generating pairs of corresponding images in two different domains. It consists of a pair of generative adversarial networks, each responsible for generating images in one domain. We show that by enforcing a simple weight-sharing constraint, the CoGAN learns to generate pairs of corresponding images without the existence of any pairs of corresponding images in the two domains in the training set. In other words, the CoGAN learns a joint distribution of images in the two domains from images drawn separately from the marginal distributions of the individual domains. This is in contrast to the existing multi-modal generative models, which require corresponding images for training. We apply the CoGAN to several pair image generation tasks. For each task, the CoGAN learns to generate convincing pairs of corresponding images. We further demonstrate the applications of the CoGAN framework for the domain adaptation and cross-domain image generation tasks.
CoGAN REF couples two GANs with shared weights to generate paired image samples.
10627900
Coupled Generative Adversarial Networks
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science" ] }
Text is not unadulterated fact. A text can make you laugh or cry but can it also make you short sell your stocks in company A and buy up options in company B? Research in the domain of finance strongly suggests that it can. Studies have shown that both the informational and affective aspects of news text affect the markets in profound ways, impacting on volumes of trades, stock prices, volatility and even future firm earnings. This paper aims to explore a computable metric of positive or negative polarity in financial news text which is consistent with human judgments and can be used in a quantitative analysis of news sentiment impact on financial markets. Results from a preliminary evaluation are presented and discussed.
This resembles previous work by REF, who explored a computable metric of positive or negative polarity in financial news text that is consistent with human judgments and can be used in a quantitative analysis of news sentiment impact on financial markets.
6526153
Sentiment Polarity Identification in Financial News: A Cohesion-based Approach
{ "venue": "45th Annual Meeting of the Association of Computational Linguistics", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Abstract. Garbage collectors are notoriously hard to verify, due to their low-level interaction with the underlying system and the general difficulty in reasoning about reachability in graphs. Several papers have presented verified collectors, but either the proofs were hand-written or the collectors were too simplistic to use on practical applications. In this work, we present two mechanically verified garbage collectors, both practical enough to use for real-world C# benchmarks. The collectors and their associated allocators consist of x86 assembly language instructions and macro instructions, annotated with preconditions, postconditions, invariants, and assertions. We used the Boogie verification generator and the Z3 automated theorem prover to verify this assembly language code mechanically. We provide measurements comparing the performance of the verified collector with that of the standard Bartok collectors on off-the-shelf C# benchmarks, demonstrating their competitiveness.
Hawblitzel and Petrank REF show that performant verified x86 code for simple mark-and-sweep and Cheney copying collectors can be developed using the Boogie verification condition generator and the Z3 automated theorem prover.
52865497
Automated Verification of Practical Garbage Collectors
{ "venue": "LMCS 6 (3:6) 2010", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
Automated gray matter segmentation of magnetic resonance imaging data is essential for morphometric analyses of the brain, particularly when large sample sizes are investigated. However, although detection of small structural brain differences may fundamentally depend on the method used, both accuracy and reliability of different automated segmentation algorithms have rarely been compared. Here, performance of the segmentation algorithms provided by SPM8, VBM8, FSL and FreeSurfer was quantified on simulated and real magnetic resonance imaging data. First, accuracy was assessed by comparing segmentations of twenty simulated and 18 real T1 images with corresponding ground truth images. Second, reliability was determined in ten T1 images from the same subject and in ten T1 images of different subjects scanned twice. Third, the impact of preprocessing steps on segmentation accuracy was investigated. VBM8 showed a very high accuracy and a very high reliability. FSL achieved the highest accuracy but demonstrated poor reliability and FreeSurfer showed the lowest accuracy, but high reliability. A universally valid recommendation on how to implement morphometric analyses is not warranted due to the vast number of scanning and analysis parameters. However, our analysis suggests that researchers can optimize their individual processing procedures with respect to final segmentation quality and exemplifies adequate performance criteria.
Eggert et al. REF analyzed and discussed several factors that may affect MRI segmentation in terms of the final segmentation quality and specific adequate performance criteria.
6018270
Accuracy and Reliability of Automated Gray Matter Segmentation Pathways on Real and Simulated Structural Magnetic Resonance Images of the Human Brain
{ "venue": "PLoS ONE", "journal": "PLoS ONE", "mag_field_of_study": [ "Medicine", "Computer Science" ] }
Abstract-The IEEE 802.11e Medium Access Control (MAC) for Quality-of-Service (QoS) support in 802.11 networks defines burst transmission and new acknowledgment (ACK) operations as optional mechanisms for increasing channel utilization. In this paper, we investigate how the performance of these new features is affected by the presence of fiber delay in high speed Wireless LAN (WLAN) over fiber networks. It is shown that the negative effect of the fiber delay on the throughput performance of the 802.11 MAC protocol can be significantly reduced when burst transmission is used with the Block or the No ACK policies. Index Terms-IEEE 802.11e, medium access control, Radio over Fiber, wireless LAN.
In REF , the authors research the impact that fiber delay has on the performance of the acknowledgment policies in fiber-fed WLANs.
22351476
Use of Different Acknowledgement Policies for Burst Transmission in Fiber-fed Wireless LANs
{ "venue": "IEEE Communications Letters", "journal": "IEEE Communications Letters", "mag_field_of_study": [ "Computer Science" ] }
Abstract-Web service technology aims to enable the interoperation of heterogeneous systems and the reuse of distributed functions in an unprecedented scale and has achieved significant success. There are still, however, challenges to realize its full potential. One of these challenges is to ensure the behavior of Web services consistent with their requirements. Monitoring events that are relevant to Web service requirements is, thus, an important technique. This paper introduces an online monitoring approach for Web service requirements. It includes a pattern-based specification of service constraints that correspond to service requirements, and a monitoring model that covers five kinds of system events relevant to client request, service response, application, resource, and management, and a monitoring framework in which different probes and agents collect events and data that are sensitive to requirements. The framework analyzes the collected information against the prespecified constraints, so as to evaluate the behavior and use of Web services. The prototype implementation and experiments with a case study shows that our approach is effective and flexible, and the monitoring cost is affordable.
Wang et al. REF introduce an online monitoring approach for Web service requirements, where monitoring code is embedded inside the target code.
11942877
An Online Monitoring Approach for Web Service Requirements
{ "venue": "IEEE Transactions on Services Computing", "journal": "IEEE Transactions on Services Computing", "mag_field_of_study": [ "Computer Science" ] }
In this paper, we propose a correlated and individual multi-modal deep learning (CIMDL) method for RGB-D object recognition. Unlike most conventional RGB-D object recognition methods which extract features from the RGB and depth channels individually, our CIMDL jointly learns feature representations from raw RGB-D data with a pair of deep neural networks, so that the sharable and modal-specific information can be simultaneously and explicitly exploited. Specifically, we construct a pair of deep residual networks for the RGB and depth data, and concatenate them at the top layer of the network with a loss function which learns a new feature space where both the correlated part and the individual part of the RGB-D information are well modelled. The parameters of the whole networks are updated by using the back-propagation criterion. Experimental results on two widely used RGB-D object image benchmark datasets clearly show that our method outperforms most of the state-of-the-art methods.
Wang et al. REF obtain the multi-modal feature by using a custom layer to separate the individual and correlated information of the extracted RGB and depth features.
6567742
Correlated and Individual Multi-Modal Deep Learning for RGB-D Object Recognition
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science" ] }
In this paper, we propose the "adversarial autoencoder" (AAE), which is a probabilistic autoencoder that uses the recently proposed generative adversarial networks (GAN) to perform variational inference by matching the aggregated posterior of the hidden code vector of the autoencoder with an arbitrary prior distribution. Matching the aggregated posterior to the prior ensures that generating from any part of prior space results in meaningful samples. As a result, the decoder of the adversarial autoencoder learns a deep generative model that maps the imposed prior to the data distribution. We show how the adversarial autoencoder can be used in applications such as semi-supervised classification, disentangling style and content of images, unsupervised clustering, dimensionality reduction and data visualization. We performed experiments on MNIST, Street View House Numbers and Toronto Face datasets and show that adversarial autoencoders achieve competitive results in generative modeling and semi-supervised classification tasks.
The adversarial autoencoder REF combines GANs and VAEs, using a discriminator to match the aggregated posterior of the latent code to an arbitrary prior.
5092785
Adversarial Autoencoders
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science" ] }
Abstract-This paper proposes a load balancing algorithm that determines the optimal load for each host so as to minimize the overall mean job response time in a distributed computer system that consists of heterogeneous hosts. The algorithm is a simplified and easily understandable version of the single-point algorithm originally presented by Tantawi and Towsley. Index Terms-Distributed computer systems, local area networks, optimal load, optimal static load balancing, single-point algorithm, star network configurations.
For example, in REF, Kim and Kameda proposed a simplified load balancing algorithm that minimizes the overall mean job response time by adjusting each node's load in a distributed computer system of heterogeneous hosts, based on the single-point algorithm originally presented by Tantawi and Towsley.
18337053
An algorithm for optimal static load balancing in distributed computer systems
{ "venue": null, "journal": "IEEE Transactions on Computers", "mag_field_of_study": [ "Computer Science" ] }
Abstract. This article demonstrates how the User Requirements Notation (URN) can be used to model business processes. URN combines goals and scenarios in order to help capture and reason about user requirements prior to detailed design. In terms of application areas, this emerging standard targets reactive systems in general, with a particular focus on telecommunications systems and services. This article argues that the URN can also be applied to business process modeling. To this end, it illustrates the notation, its use, and its benefits with a supply chain management case study. It then briefly compares this approach to related modeling approaches, namely, use case-driven design, service-oriented architecture analysis, and conceptual value modeling. The authors hope that a URN-based approach will provide usable and useful tools to assist researchers and practitioners with the modeling, analysis, integration, and evolution of existing and emerging business processes.
In addition, the use of User Requirements Notation for business process modelling is proposed by Weiss and Amyot REF .
15792852
Business Process Modeling with URN
{ "venue": "International Journal of E-Business Research", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
The use of Flash memories in portable embedded systems is ever increasing. This is because of the multi-level storage capability that makes them excellent candidates for high density memory devices. However, cost of writing or programming Flash memories is an order of magnitude higher than traditional memories. In this paper, we design an algorithm to reduce both average write energy and latency in Flash memories. We achieve this by reducing the number of expensive '01' and '10' bit-patterns during error control coding. We show that the algorithm does not change the error correction capability and moreover improves endurance. Simulations results on representative bit-stream traces show that the use of the proposed algorithm saves, on average, 33% of write energy and 31% of latency of Intel MLC NOR Flash memory, and improves the endurance by 24%.
Reference REF proposed the energy-aware error control coding to reduce the energy of write operation to NAND flash memory.
13896691
Energy-aware error control coding for Flash memories
{ "venue": "2009 46th ACM/IEEE Design Automation Conference", "journal": "2009 46th ACM/IEEE Design Automation Conference", "mag_field_of_study": [ "Computer Science" ] }
Mobile device users want to be able to access and manipulate information and services specific to their location, time, and environment. Context information gathered from sensors, networks, device status, user profiles, and other sources can enhance mobile applications' usability by letting them adapt to conditions that directly affect their operations. To achieve true context awareness, however, mobile systems must produce reliable information in the presence of uncertain, rapidly changing, partially true data from multiple heterogeneous sources. Mobile devices equipped with low-cost sensing elements can recognize some aspects of context. However, extracting relevant context information by fusing data from several sensors proves challenging because noise, faulty connections, drift, miscalibration, wear and tear, humidity, and other factors degrade data acquisition. Extracted contexts overlap, change with time, and yield only partially reliable approximations. Furthermore, mobile devices' dynamic environments require that we learn context descriptions from multidimensional data. Learning systems can't easily generalize beyond training data, however. Using even sufficiently reliable derived contexts directly to control mobile applications poses problems because users with different ideas of "context" might find application behavior irritating. To address these challenges, we present a uniform mobile terminal software framework that provides systematic methods for acquiring and processing useful context information from a user's surroundings and giving it to applications. The framework we present permits recognizing semantic contexts in real time in the presence of uncertain, noisy, and rapidly changing information and delivering contexts for the terminal applications in an event-based manner. Our application programming interface (API) for using semantic context information uses an expandable context ontology to define contexts that clients can use. We chose a blackboard-based approach 1 as the underlying communication paradigm between framework entities. Our approach focuses on mobile terminal capabilities rather than infrastructure. Accordingly, we designed the framework for the Symbian platform (www.symbian.com) to achieve true device mobility, high performance, and a broad user base. Four main functional entities comprise the context management framework.
The context management framework (CMF) REF allows semantic reasoning about context in real time, even in the presence of noise, uncertainty, and rapid variation in the context.
206480358
Managing Context Information in Mobile Devices
{ "venue": "IEEE Pervasive Comput.", "journal": "IEEE Pervasive Comput.", "mag_field_of_study": [ "Computer Science" ] }
Abstract. We propose a simple neural network model to deal with the domain adaptation problem in object recognition. Our model incorporates the Maximum Mean Discrepancy (MMD) measure as a regularization in the supervised learning to reduce the distribution mismatch between the source and target domains in the latent space. From experiments, we demonstrate that the MMD regularization is an effective tool to provide good domain adaptation models on both SURF features and raw image pixels of a particular image data set. We also show that our proposed model, preceded by the denoising auto-encoder pretraining, achieves better performance than recent benchmark models on the same data sets. This work represents the first study of MMD measure in the context of neural networks.
REF proposed an architecture that minimizes the maximum mean discrepancy between source and target distributions.
17674695
Domain Adaptive Neural Networks for Object Recognition
{ "venue": "ArXiv", "journal": "ArXiv", "mag_field_of_study": [ "Computer Science" ] }
Privacy-preserving data queries for wireless sensor networks (WSNs) have drawn much attention recently. This paper proposes a privacy-preserving MAX/MIN query processing approach based on random secure comparator selection in two-tiered sensor network, which is denoted by RSCS-PMQ. The secret comparison model is built on the basis of the secure comparator which is defined by 0-1 encoding and HMAC. And the minimal set of highest secure comparators generating algorithm MaxRSC is proposed, which is the key to realize RSCS-PMQ. In the data collection procedures, the sensor node randomly selects a generated secure comparator of the maximum data into ciphertext which is submitted to the nearby master node. In the query processing procedures, the master node utilizes the MaxRSC algorithm to determine the corresponding minimal set of candidate ciphertexts containing the query results and returns it to the base station. And the base station obtains the plaintext query result through decryption. The theoretical analysis and experimental result indicate that RSCS-PMQ can preserve the privacy of sensor data and query result from master nodes even if they are compromised, and it has a better performance on the network communication cost than the existing approaches.
Based on EMQP, a random secure comparator selection optimization is introduced to achieve a more efficient privacy-preserving MAX/MIN query (RSCS-PMQ) REF .
26882908
Random Secure Comparator Selection Based Privacy-Preserving MAX/MIN Query Processing in Two-Tiered Sensor Networks
{ "venue": "J. Sensors", "journal": "J. Sensors", "mag_field_of_study": [ "Computer Science" ] }
Abstract-The new flexgrid technology, in opposition to the fixed grid one traditionally used in wavelength switched optical networks (WSON), allows allocating the spectral bandwidth needed to convey heterogeneous client demand bitrates in a flexible manner so that the optical spectrum can be managed much more efficiently. In this paper we propose a new recovery scheme, called single-path provisioning multi-path recovery (SPP-MPR), specifically designed for flexgrid-based optical networks. It provisions single-paths to serve the bitrate requested by client demands and combines protection and restoration schemes to jointly recover, in part or totally, that bitrate in case of failure. We define the bitrate squeezed recovery optimization (BRASERO) problem to maximize the bitrate which is recovered in case of failure of any single fiber link. A mixed integer linear programming (MILP) formulation is provided. Exhaustive numerical experiments carried out over two network topologies and realistic traffic scenarios show that the efficiency of the proposed SPP-MPR scheme approaches that of restoration mechanisms while providing recovery times as short as protection schemes. Index Terms-Flexgrid optical networks, Single-path Provisioning Multi-path Recovery, Bitrate squeezing.
The single-path provisioning multi-path recovery (SPP-MPR) scheme with bitrate squeezing is presented in REF .
16067535
Single-path provisioning with multi-path recovery in flexgrid optical networks
{ "venue": "2012 IV International Congress on Ultra Modern Telecommunications and Control Systems", "journal": "2012 IV International Congress on Ultra Modern Telecommunications and Control Systems", "mag_field_of_study": [ "Computer Science" ] }
Abstract -This paper presents an adaptive MAC (AMAC) protocol for supporting MAC layer adaptation in cognitive radio networks. MAC protocol adaptation is motivated by the flexibility of emerging software-defined radios which make it feasible to dynamically adjust radio protocols and parameters. Dynamic changes to the MAC layer may be useful in wireless networking scenarios such as tactical or vehicular communications where the radio node density and service requirements can vary widely over time. A specific control framework for the proposed AMAC is described based on the "CogNet" protocol stack which uses a "global control plane (GCP)" to distribute control information between nearby radios. A proof-of-concept AMAC prototype which switches between CSMA and TDMA is implemented using GNU radio platforms on the ORBIT radio grid testbed. Experimental results are given for both UDP and TCP with dynamic traffic variations. The results show that adaptive MAC can be implemented with reasonable control protocol overhead and latency, and that the adaptive network achieves improved performance relative to a conventional static system. I.
Huang et al. REF develop an adaptive MAC protocol that can switch between multiple MAC schemes.
11806450
MAC Protocol Adaptation in Cognitive Radio Networks: An Experimental Study
{ "venue": "2009 Proceedings of 18th International Conference on Computer Communications and Networks", "journal": "2009 Proceedings of 18th International Conference on Computer Communications and Networks", "mag_field_of_study": [ "Computer Science" ] }
Here we explore mining data on gene expression from the biomedical literature and present Gene Expression Text Miner (GETM), a tool for extraction of information about the expression of genes and their anatomical locations from text. Provided with recognized gene mentions, GETM identifies mentions of anatomical locations and cell lines, and extracts text passages where authors discuss the expression of a particular gene in specific anatomical locations or cell lines. This enables the automatic construction of expression profiles for both genes and anatomical locations. Evaluated against a manually extended version of the BioNLP '09 corpus, GETM achieved precision and recall levels of 58.8% and 23.8%, respectively. Application of GETM to MEDLINE and PubMed Central yielded over 700,000 gene expression mentions. This data set may be queried through a web interface, and should prove useful not only for researchers who are interested in the developmental regulation of specific genes of interest, but also for database curators aiming to create structured repositories of gene expression information. The compiled tool, its source code, the manually annotated evaluation corpus and a search query interface to the data set extracted from MEDLINE and PubMed Central is available at http://getmproject.sourceforge.net/.
In a study by Gerner et al. REF , gene-expression information and anatomical locations were extracted by applying a rule-based gene expression text miner to approximately 7,000 PubMed Central articles.
3872223
An Exploration of Mining Gene Expression Mentions and Their Anatomical Locations from Biomedical Text
{ "venue": "Proceedings of the 2010 Workshop on Biomedical Natural Language Processing", "journal": null, "mag_field_of_study": [ "Computer Science" ] }
This paper proposes a new design of non-orthogonal multiple access (NOMA) under secrecy considerations. We focus on a NOMA system, where a transmitter sends confidential messages to multiple users in the presence of an external eavesdropper. The optimal designs of decoding order, transmission rates, and power allocated to each user are investigated. Considering the practical passive eavesdropping scenario where the instantaneous channel state of the eavesdropper is unknown, we adopt the secrecy outage probability as the secrecy metric. We first consider the problem of minimizing the transmit power subject to the secrecy outage and quality of service constraints, and derive the closed-form solution to this problem. We then explore the problem of maximizing the minimum confidential information rate among users subject to the secrecy outage and transmit power constraints, and provide an iterative algorithm to solve this problem. We find that the secrecy outage constraint in the studied problems does not change the optimal decoding order for NOMA, and one should increase the power allocated to the user whose channel is relatively bad when the secrecy constraint becomes more stringent. Finally, we show the advantage of NOMA over orthogonal multiple access in the studied problems both analytically and numerically. Index Terms-Physical layer security, non-orthogonal multiple access, secrecy outage probability, power allocation.
The optimal designs of decoding order, transmission rates, and power allocated to each user for secure downlink NOMA were investigated in REF .
3719923
On the Design of Secure Non-Orthogonal Multiple Access Systems
{ "venue": "IEEE Journal on Selected Areas in Communications", "journal": "IEEE Journal on Selected Areas in Communications", "mag_field_of_study": [ "Mathematics", "Computer Science" ] }