Dataset columns:
aid: string (length 9-15)
mid: string (length 7-10)
abstract: string (length 78-2.56k)
related_work: string (length 92-1.77k)
ref_abstract: dict
1901.06268
2908914784
Biological data are extremely diverse and complex, but also quite sparse. Recent developments in deep learning methods offer new possibilities for the analysis of complex data. However, it is easy to obtain a deep learning model that seems to give good results but is in fact overfitting either the training data or the validation data. In particular, overfitting the validation data, called "information leak", is almost never addressed in papers proposing deep learning models to predict protein-protein interactions (PPI). In this work, we compare two carefully designed deep learning models and show pitfalls to avoid while predicting PPIs through machine learning methods. Our best model accurately predicts more than 78% of human PPI, under very strict conditions both for training and testing. The methodology we propose here allows us to have strong confidence in the ability of a model to scale up to larger datasets. This would allow sharper models when larger datasets become available, rather than current models prone to information leaks. Our solid methodological foundations should be applicable to more organisms and whole-proteome network predictions.
The authors of @cite_9 use stacked auto-encoders to extract features from protein sequences. The classification predicting protein-protein interactions is then done by directly linking the output of the last auto-encoder to a softmax classifier. As inputs, they convert the sequences into fixed-size Boolean vectors, one per sequence, encoding the presence or absence of 3-grams of amino acids, i.e., each possible combination of 3 amino acids. They trained their model using 5-fold or 10-fold cross-validation, depending on the dataset.
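To make the encoding concrete, here is a minimal Python sketch of such a 3-gram Boolean vector (not the cited authors' code; the standard 20-letter amino-acid alphabet, giving 20^3 = 8000 possible 3-grams, is assumed):

```python
from itertools import product

import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # standard 20-letter alphabet (assumption)
# Index every possible 3-gram of amino acids: 20^3 = 8000 entries.
TRIGRAM_INDEX = {"".join(t): i for i, t in enumerate(product(AMINO_ACIDS, repeat=3))}

def trigram_boolean_vector(sequence: str) -> np.ndarray:
    """Fixed-size Boolean vector marking which 3-grams occur in the sequence."""
    vec = np.zeros(len(TRIGRAM_INDEX), dtype=bool)
    for i in range(len(sequence) - 2):
        idx = TRIGRAM_INDEX.get(sequence[i:i + 3])
        if idx is not None:          # skip 3-grams containing non-standard residues
            vec[idx] = True
    return vec

# One vector per protein; a protein pair is then represented by two such vectors.
v = trigram_boolean_vector("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")
print(v.shape, int(v.sum()))  # (8000,) and the number of distinct 3-grams present
```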
{ "cite_N": [ "@cite_9" ], "mid": [ "2618265628" ], "abstract": [ "Abstract Background Protein-protein interactions (PPIs) are critical for many biological processes. It is therefore important to develop accurate high-throughput methods for identifying PPI to better understand protein function, disease occurrence, and therapy design. Though various computational methods for predicting PPI have been developed, their robustness for prediction with external datasets is unknown. Deep-learning algorithms have achieved successful results in diverse areas, but their effectiveness for PPI prediction has not been tested. Results We used a stacked autoencoder, a type of deep-learning algorithm, to study the sequence-based PPI prediction. The best model achieved an average accuracy of 97.19 with 10-fold cross-validation. The prediction accuracies for various external datasets ranged from 87.99 to 99.21 , which are superior to those achieved with previous methods. Conclusions To our knowledge, this research is the first to apply a deep-learning algorithm to sequence-based PPI prediction, and the results demonstrate its potential in this field." ] }
1901.06268
2908914784
The authors of @cite_0 proposed a plain fully connected neural network, similar to our first model in this paper but significantly bigger, with layers of 512, 256 and 128 units for what amounts to feature extraction, and 128 units for the head of the network. However, they do not feed protein sequences to the network but rather a list of hand-extracted features, such as sequence-order descriptors and composition-transition-distribution descriptors. They used a hold-out validation set together with a test set, but then switched to 5-fold cross-validation when comparing their model with others on additional datasets.
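For illustration, the following Keras sketch builds a comparable (not identical) two-branch fully connected network: 512-256-128 units per protein descriptor vector, concatenated into a 128-unit head. The descriptor dimensionality and all training settings are placeholders, not values from @cite_0:

```python
from tensorflow import keras
from tensorflow.keras import layers

DESCRIPTOR_DIM = 1164  # placeholder size of the hand-crafted descriptor vector

def branch(inputs):
    x = inputs
    for units in (512, 256, 128):      # per-protein "feature extraction" stack
        x = layers.Dense(units, activation="relu")(x)
    return x

in_a = keras.Input(shape=(DESCRIPTOR_DIM,), name="protein_a")
in_b = keras.Input(shape=(DESCRIPTOR_DIM,), name="protein_b")
merged = layers.concatenate([branch(in_a), branch(in_b)])
head = layers.Dense(128, activation="relu")(merged)        # head of the network
out = layers.Dense(1, activation="sigmoid", name="interaction")(head)

model = keras.Model([in_a, in_b], out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```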
{ "cite_N": [ "@cite_0" ], "mid": [ "2616246685" ], "abstract": [ "The complex language of eukaryotic gene expression remains incompletely understood. Despite the importance suggested by many proteins variants statistically associated with human disease, nearly all such variants have unknown mechanisms, for example, protein–protein interactions (PPIs). In this study, we address this challenge using a recent machine learning advance-deep neural networks (DNNs). We aim at improving the performance of PPIs prediction and propose a method called DeepPPI (Deep neural networks for Protein–Protein Interactions prediction), which employs deep neural networks to learn effectively the representations of proteins from common protein descriptors. The experimental results indicate that DeepPPI achieves superior performance on the test data set with an Accuracy of 92.50 , Precision of 94.38 , Recall of 90.56 , Specificity of 94.49 , Matthews Correlation Coefficient of 85.08 and Area Under the Curve of 97.43 , respectively. Extensive experiments show that DeepPPI can learn useful feat..." ] }
1901.06268
2908914784
The authors of @cite_17 use a Deep Polynomial Network on hand-extracted features, such as amino acid mutation rates or hydrophobicity properties of proteins, to perform their classification. Thus, they do not use the chain of amino acid residues as input. They based the learning process on 5-fold cross-validation without a separate test set.
{ "cite_N": [ "@cite_17" ], "mid": [ "2808071711" ], "abstract": [ "Predicting the protein–protein interactions (PPIs) has played an important role in many applications. Hence, a novel computational method for PPIs prediction is highly desirable. PPIs endow with protein amino acid mutation rate and two physicochemical properties of protein (e.g., hydrophobicity and hydrophilicity). Deep polynomial network (DPN) is well-suited to integrate these modalities since it can represent any function on a finite sample dataset via the supervised deep learning algorithm. We propose a multimodal DPN (MDPN) algorithm to effectively integrate these modalities to enhance prediction performance. MDPN consists of a two-stage DPN, the first stage feeds multiple protein features into DPN encoding to obtain high-level feature representation while the second stage fuses and learns features by cascading three types of high-level features in the DPN encoding. We employ a regularized extreme learning machine to predict PPIs. The proposed method is tested on the public dataset of H. pylori, Human, and Yeast and achieves average accuracies of 97.87 , 99.90 , and 98.11 , respectively. The proposed method also achieves good accuracies on other datasets. Furthermore, we test our method on three kinds of PPI networks and obtain superior prediction results." ] }
1901.06268
2908914784
The authors of @cite_24 present a model composed of an embedding layer, three convolution layers and an LSTM layer for feature extraction from protein sequences, before concatenating the LSTM outputs of both proteins and performing classification with a fully connected layer linked to a sigmoid classifier. The architecture of our recurrent model is thus close to theirs, modulo the embedding and the hyper-parameters. It is important to notice that they do not mention applying any regularization method to train their network. Their inputs are also sequence-based, and they apply 5-fold cross-validation during training, with a hold-out test set. Interestingly, they also zero-pad their inputs, so they must mask these zeros in the embedding layer for it to learn anything (otherwise the layer interprets zeros as real data, and their large number would prevent it from learning any good representation of the input). They use convolution layers after the embedding and, as in this paper, they use Keras as an API front-end to program their model. However, the current implementation of convolution layers in Keras does not accept zero-masked data, and the authors do not explain in their paper how they worked around this technical issue.
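To make the masking caveat concrete, below is a hedged Keras sketch of a similar (not identical) per-protein encoder: embedding, three convolutions, then an LSTM, with the two encodings concatenated for a sigmoid decision. Sharing the encoder between the two proteins, the vocabulary size, sequence length and layer widths are simplifications and assumptions of this sketch, not the cited design. Setting mask_zero=True on the Embedding layer would propagate a mask that the Conv1D layers cannot consume, which is exactly the issue discussed above, so the sketch leaves masking off and simply zero-pads the inputs:

```python
from tensorflow import keras
from tensorflow.keras import layers

VOCAB = 26        # placeholder: amino-acid tokens plus padding index 0
MAX_LEN = 1000    # placeholder maximum sequence length (zero-padded)

def encoder():
    inp = keras.Input(shape=(MAX_LEN,), dtype="int32")
    # mask_zero=True would emit a mask that Conv1D does not handle,
    # so padding zeros are left unmasked here (the caveat discussed above).
    x = layers.Embedding(VOCAB, 32, mask_zero=False)(inp)
    for filters in (64, 64, 64):                 # three convolution layers
        x = layers.Conv1D(filters, 3, activation="relu", padding="same")(x)
        x = layers.MaxPooling1D(2)(x)
    x = layers.LSTM(64)(x)                       # recurrent summary of the sequence
    return keras.Model(inp, x)

enc = encoder()                                  # shared weights (a choice of this sketch)
seq_a = keras.Input(shape=(MAX_LEN,), dtype="int32")
seq_b = keras.Input(shape=(MAX_LEN,), dtype="int32")
merged = layers.concatenate([enc(seq_a), enc(seq_b)])
hidden = layers.Dense(128, activation="relu")(merged)
out = layers.Dense(1, activation="sigmoid")(hidden)

model = keras.Model([seq_a, seq_b], out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```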
{ "cite_N": [ "@cite_24" ], "mid": [ "2885583144" ], "abstract": [ "Machine learning based predictions of protein–protein interactions (PPIs) could provide valuable insights into protein functions, disease occurrence, and therapy design on a large scale. The intensive feature engineering in most of these methods makes the prediction task more tedious and trivial. The emerging deep learning technology enabling automatic feature engineering is gaining great success in various fields. However, the over-fitting and generalization of its models are not yet well investigated in most scenarios. Here, we present a deep neural network framework (DNN-PPI) for predicting PPIs using features learned automatically only from protein primary sequences. Within the framework, the sequences of two interacting proteins are sequentially fed into the encoding, embedding, convolution neural network (CNN), and long short-term memory (LSTM) neural network layers. Then, a concatenated vector of the two outputs from the previous layer is wired as the input of the fully connected neural network. Finally, the Adam optimizer is applied to learn the network weights in a back-propagation fashion. The different types of features, including semantic associations between amino acids, position-related sequence segments (motif), and their long- and short-term dependencies, are captured in the embedding, CNN and LSTM layers, respectively. When the model was trained on Pan’s human PPI dataset, it achieved a prediction accuracy of 98.78 at the Matthew’s correlation coefficient (MCC) of 97.57 . The prediction accuracies for six external datasets ranged from 92.80 to 97.89 , making them superior to those achieved with previous methods. When performed on Escherichia coli, Drosophila, and Caenorhabditis elegans datasets, DNN-PPI obtained prediction accuracies of 95.949 , 98.389 , and 98.669 , respectively. The performances in cross-species testing among the four species above coincided in their evolutionary distances. However, when testing Mus Musculus using the models from those species, they all obtained prediction accuracies of over 92.43 , which is difficult to achieve and worthy of note for further study. These results suggest that DNN-PPI has remarkable generalization and is a promising tool for identifying protein interactions." ] }
1901.06268
2908914784
Finally, @cite_10 present a fully connected model regularized by dropout. Like @cite_0 , they use composition-transition-distribution descriptors as features. They apply 5-fold cross-validation and have no separate test set.
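As an illustration of such descriptors, the sketch below computes composition, transition and distribution features for one physicochemical property; the residue grouping and quantile convention shown are common choices taken here as assumptions, not necessarily those of @cite_10:

```python
import numpy as np

# Example 3-group residue partition for one physicochemical property
# (a hydrophobicity-style grouping, shown purely as an assumption).
GROUPS = {"1": set("RKEDQN"), "2": set("GASTPHY"), "3": set("CLVIMFW")}

def ctd(sequence: str) -> np.ndarray:
    labels = [g for aa in sequence for g, members in GROUPS.items() if aa in members]
    n = len(labels)
    # Composition: fraction of residues falling in each group.
    comp = [labels.count(g) / n for g in GROUPS]
    # Transition: fraction of adjacent positions whose groups differ, per group pair.
    pairs = list(zip(labels, labels[1:]))
    trans = [sum(1 for a, b in pairs if {a, b} == {g1, g2}) / max(len(pairs), 1)
             for i, g1 in enumerate(GROUPS) for g2 in list(GROUPS)[i + 1:]]
    # Distribution: relative positions of the first, 25%, 50%, 75% and last occurrence.
    dist = []
    for g in GROUPS:
        pos = [i + 1 for i, lab in enumerate(labels) if lab == g]
        for frac in (0.0, 0.25, 0.5, 0.75, 1.0):
            dist.append(pos[min(int(frac * (len(pos) - 1)), len(pos) - 1)] / n if pos else 0.0)
    return np.array(comp + trans + dist)

# 3 composition + 3 transition + 15 distribution = 21 features per property.
print(ctd("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ").shape)
```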
{ "cite_N": [ "@cite_0", "@cite_10" ], "mid": [ "2616246685", "2804331675" ], "abstract": [ "The complex language of eukaryotic gene expression remains incompletely understood. Despite the importance suggested by many proteins variants statistically associated with human disease, nearly all such variants have unknown mechanisms, for example, protein–protein interactions (PPIs). In this study, we address this challenge using a recent machine learning advance-deep neural networks (DNNs). We aim at improving the performance of PPIs prediction and propose a method called DeepPPI (Deep neural networks for Protein–Protein Interactions prediction), which employs deep neural networks to learn effectively the representations of proteins from common protein descriptors. The experimental results indicate that DeepPPI achieves superior performance on the test data set with an Accuracy of 92.50 , Precision of 94.38 , Recall of 90.56 , Specificity of 94.49 , Matthews Correlation Coefficient of 85.08 and Area Under the Curve of 97.43 , respectively. Extensive experiments show that DeepPPI can learn useful feat...", "Abstract Protein–protein interactions (PPIs) are of vital importance to most biological processes. Plenty of PPIs have been identified by wet-lab experiments in the past decades, but there are still abundant uncovered PPIs. Furthermore, wet-lab experiments are expensive and limited by the adopted experimental protocols. Although various computational models have been proposed to automatically predict PPIs and provided reliable interactions for experimental verification, the problem is still far from being solved. Novel and competent models are still anticipated. In this study, a neural network based approach called EnsDNN (Ensemble Deep Neural Networks) is proposed to predict PPIs based on different representations of amino acid sequences. Particularly, EnsDNN separately uses auto covariance descriptor, local descriptor, and multi-scale continuous and discontinuous local descriptor, to represent and explore the pattern of interactions between sequentially distant and spatially close amino acid residues. It then trains deep neural networks (DNNs) with different configurations based on each descriptor. Next, EnsDNN integrates these DNNs into an ensemble predictor to leverage complimentary information of these descriptors and of DNNs, and to predict potential PPIs. EnsDNN achieves superior performance with accuracy of 95.29 , sensitivity of 95.12 , and precision of 95.45 on predicting PPIs of Saccharomyces cerevisiae. Results on other five independent PPI datasets also demonstrate that EnsDNN gets better prediction performance than other related comparing methods." ] }
1901.06268
2908914784
We can also mention the work of @cite_8 , which proposes a multi-layered LSTM model to predict interface residue pair interactions, thus working at a finer level than predicting the interaction between two whole proteins. This is a direction towards which we would like to extend our results.
{ "cite_N": [ "@cite_8" ], "mid": [ "2614995786" ], "abstract": [ "Motivation: Proteins usually fulfill their biological functions by interacting with other proteins. Although some methods have been developed to predict the binding sites of a monomer protein, these are not sufficient for prediction of the interaction between two monomer proteins. The correct prediction of interface residue pairs from two monomer proteins is still an open question and has great significance for practical experimental applications in the life sciences. We hope to build a method for the prediction of interface residue pairs that is suitable for those applications. Results: Here, we developed a novel deep network architecture called the multi-layered Long-Short Term Memory networks (LSTMs) approach for the prediction of protein interface residue pairs. Firstly, we created three new descriptions and used other six worked characterizations to describe an amino acid, then we employed these features to discriminate between interface residue pairs and non-interface residue pairs. Secondly, we used two thresholds to select residue pairs that are more likely to be interface residue pairs. Furthermore, this step increases the proportion of interface residue pairs and reduces the influence of imbalanced data. Thirdly, we built deep network architectures based on Long-Short Term Memory networks algorithm to organize and refine the prediction of interface residue pairs by employing features mentioned above. We trained the deep networks on dimers in the unbound state in the international Protein-protein Docking Benchmark version 3.0. The updated data sets in the versions 4.0 and 5.0 were used as the validation set and test set respectively. For our best model, the accuracy rate was over 62 when we chose the top 0.2 pairs of every dimer in the test set as predictions, which will be very helpful for the understanding of protein-protein interaction mechanisms and for guidance in biological experiments." ] }
1901.06263
2911010497
Considering the advances in building monitoring and control through networks of interconnected devices, effective handling of the associated rich data streams is becoming an important challenge. In many situations the application of conventional system identification or approximate grey-box models, partly theoretic and partly data-driven, is either unfeasible or unsuitable. The paper discusses and illustrates an application of black-box modelling achieved using data mining techniques with the purpose of smart building ventilation subsystem control. We present the implementation and evaluation of a data mining methodology on data collected over one year of operation. The case study is carried out on four air handling units of a modern campus building for preliminary decision support for facility managers. The data processing and learning framework is based on two steps: raw data streams are compressed using the Symbolic Aggregate Approximation method, and the resulting segments are then input into a Support Vector Machine algorithm. The results are useful for deriving the behaviour of each piece of equipment in its various modes of operation and can be built upon for fault detection or energy efficiency applications. Challenges related to online operation within a commercial Building Management System are also discussed, as the approach shows promise for deployment.
A paper focused on energy-efficiency improvements leveraging available building-level data for data mining is @cite_7 . The authors list the main predictive tasks in which data mining of large quantities of measurements and contextual information is relevant. These cover building energy demand prediction, building occupancy and occupant behaviour, and fault detection and diagnosis (FDD) for building systems. @cite_17 and @cite_5 further argue, through broader studies, for the relevance of data-driven approaches in timely building energy-efficiency applications.
{ "cite_N": [ "@cite_5", "@cite_7", "@cite_17" ], "mid": [ "2801774872", "2204655733", "2754029504" ], "abstract": [ "Abstract Due to its significant contribution to global energy usage and the associated greenhouse gas emissions, existing building stock's energy efficiency must improve. Predictive building control promises to contribute to that by increasing the efficiency of building operations. Predictive control complements other means to increase performance such as refurbishments as well as modernizations of systems. This survey reviews recent works and contextualizes these with the current state of the art of interrelated topics in data handling, building automation, distributed control, and semantics. The comprehensive overview leads to seven research questions guiding future research directions.", "Abstract The rapidly growing and gigantic body of stored data in the building field, coupled with the need for data analysis, has generated an urgent need for powerful tools that can extract hidden but useful knowledge of building performance improvement from large data sets. As an emerging subfield of computer science, data mining technologies suit this need well and have been proposed for relevant knowledge discovery in the past several years. Aimed to highlight recent advances, this paper provides an overview of the studies undertaking the two main data mining tasks (i.e. predictive tasks and descriptive tasks) in the building field. Based on the overview, major challenges and future research trends are also discussed.", "Abstract Energy is the lifeblood of modern societies. In the past decades, the world's energy consumption and associated CO2 emissions increased rapidly due to the increases in population and comfort demands of people. Building energy consumption prediction is essential for energy planning, management, and conservation. Data-driven models provide a practical approach to energy consumption prediction. This paper offers a review of the studies that developed data-driven building energy consumption prediction models, with a particular focus on reviewing the scopes of prediction, the data properties and the data preprocessing methods used, the machine learning algorithms utilized for prediction, and the performance measures used for evaluation. Based on this review, existing research gaps are identified and future research directions in the area of data-driven building energy consumption prediction are highlighted." ] }
1901.06263
2911010497
Deployment of distributed sensor networks for finer-grained spatio-temporal monitoring of indoor conditions is performed by @cite_13 . The authors argue that statistical modelling of the indoor environment as non-parametric Gaussian processes can provide reliable information that is fed back to the building management system in order to improve HVAC control. Wireless sensors can be deployed at limited cost compared to conventional wired sensors, and the monitoring architecture can be adjusted dynamically in order to best capture field-level information. In @cite_24 a thermal comfort application using collected HVAC IoT data is presented. Building-level benchmarking data sets @cite_22 are highly important to assess algorithm performance and produce reproducible outcomes. The authors present a large database of one year of data from 507 non-residential building energy meters, mainly from university campuses. Model-based predictive control for maintaining thermal comfort in buildings is applied in @cite_18 . The optimal comfort index is achieved through a cost function depending on both occupant comfort and energy cost.
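As a rough illustration of the non-parametric Gaussian-process modelling attributed to @cite_13 , the following scikit-learn sketch fits a spatio-temporal GP to synthetic indoor temperature readings and predicts the field, with uncertainty, at an unobserved location and time; the kernel, noise level and data are assumptions for illustration only:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Synthetic sensor readings: (x position, y position, hour of day) -> temperature [degC].
X = np.column_stack([rng.uniform(0, 20, 200),      # x position in the room [m]
                     rng.uniform(0, 10, 200),      # y position [m]
                     rng.uniform(0, 24, 200)])     # time of day [h]
y = 21 + 0.1 * X[:, 0] + 1.5 * np.sin(2 * np.pi * X[:, 2] / 24) + rng.normal(0, 0.2, 200)

# Anisotropic RBF kernel: separate length scales for space and time, plus sensor noise.
kernel = RBF(length_scale=[5.0, 5.0, 6.0]) + WhiteKernel(noise_level=0.05)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

# Predict the indoor field (with uncertainty) at an unobserved point and time.
mean, std = gp.predict([[10.0, 5.0, 14.0]], return_std=True)
print(f"estimated temperature: {mean[0]:.2f} +/- {std[0]:.2f} degC")
```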
{ "cite_N": [ "@cite_24", "@cite_18", "@cite_13", "@cite_22" ], "mid": [ "2804138105", "2594724848", "2743217645", "2755807005" ], "abstract": [ "This paper presents an Internet of Things (IoT) platform for a smart building which provides human care services for occupants. The individual health profiles of the occupants are acquired by the IoT-based smart building, which uses the accumulated knowledge of the occupants to provide better services. To ensure the thermal comfort of the occupants inside the building, we propose a dynamic thermal model of occupants. This model is based on the heat balance equation of human body and thermal characteristics of the occupants. We implement this model in two smart building models with heaters controlled by a temperature and thermal comfort index using MATLAB Simulink®. The simulation results show that the thermal comfort-based control is more effective to maintaining occupants’ thermal satisfaction and is therefore recommended for use providing human care services using IoT platforms in smart buildings.", "The goal of maintaining users’ thermal comfort conditions in indoor environments may require complex regulation procedures and a proper energy management. This problem is being widely analyzed, since it has a direct effect on users’ productivity. This paper presents an economic model-based predictive control (MPC) whose main strength is the use of the day-ahead price (DAP) in order to predict the energy consumption associated with the heating, ventilation and air conditioning (HVAC). In this way, the control system is able to maintain a high thermal comfort level by optimizing the use of the HVAC system and to reduce, at the same time, the energy consumption associated with it, as much as possible. Later, the performance of the proposed control system is tested through simulations with a non-linear model of a bioclimatic building room. Several simulation scenarios are considered as a test-bed. From the obtained results, it is possible to conclude that the control system has a good behavior in several situations, i.e., it can reach the users’ thermal comfort for the analyzed situations, whereas the HVAC use is adjusted through the DAP; therefore, the energy savings associated with the HVAC is increased.", "The paper addresses the problem of efficiently monitoring environmental fields in a smart building by the use of a network of wireless noisy sensors that take discretely-predefined measurements at their locations through time. It is proposed that the indoor environmental fields are statistically modeled by spatio-temporal non-parametric Gaussian processes. The proposed models are able to effectively predict and estimate the indoor climate parameters at any time and at any locations of interest, which can be utilized to create timely maps of indoor environments. More importantly, the monitoring results are practically crucial for building management systems to efficiently control energy consumption and maximally improve human comfort in the building. The proposed approach was implemented in a real tested space in a university building, where the obtained results are highly promising.", "As of 2015, there are over 60 million smart meters installed in the United States; these meters are at the forefront of big data analytics in the building industry. However, only a few public data sources of hourly non-residential meter data exist for the purpose of testing algorithms. 
This paper describes the collection, cleaning, and compilation of several such data sets found publicly on-line, in addition to several collected by the authors. There are 507 whole building electrical meters in this collection, and a majority are from buildings on university campuses. This group serves as a primary repository of open, non-residential data sources that can be built upon by other researchers. An overview of the data sources, subset selection criteria, and details of access to the repository are included. Future uses include the application of new, proposed prediction and classification models to compare performance to previously generated techniques." ] }
1901.06263
2911010497
Compared to traditional model-based control (MBC), data-driven control (DDC) is an emerging field of study which accounts for the need to manage the data deluge produced by dense temporal and spatial monitoring of various systems. A broad survey on the specific nature of DDC and its comparison to MBC in various control structures is given by @cite_0 . Within this concept, the data mining and classification steps for prediction and assessment act mainly as a higher-level supervisor for field-level control loops, tuning control parameters and set-points and providing contextual information which contributes to improved robustness. One good reference application of DDC uses random forests of regression trees @cite_16 . In this case, multi-output regression trees are used to represent the system dynamics over the prediction horizon, and the control problem is solved in real-time in closed loop with the physical plant.
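The regression-tree DDC idea of @cite_16 can be approximated in a few lines: learn a multi-output forest mapping the current state and a candidate control sequence to the predicted outputs over the horizon, then pick the candidate with the lowest predicted cost. The sketch below is a toy random-shooting stand-in for the paper's receding-horizon optimisation, with synthetic dynamics and placeholder names:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
HORIZON = 6  # prediction horizon (steps)

# Toy "true" thermal dynamics, used here only to generate historical data.
def simulate(t_in, t_out, u):
    traj = []
    for uk in u:
        t_in = t_in + 0.1 * (t_out - t_in) + 0.8 * uk   # toy thermal model
        traj.append(t_in)
    return traj

# Historical samples: features = [indoor temp, outdoor temp, u_1..u_H],
# targets = indoor temperatures over the next HORIZON steps.
X, Y = [], []
for _ in range(2000):
    t_in, t_out = rng.uniform(15, 25), rng.uniform(-5, 30)
    u = rng.uniform(0, 1, HORIZON)                       # heating valve commands
    X.append([t_in, t_out, *u])
    Y.append(simulate(t_in, t_out, u))
model = RandomForestRegressor(n_estimators=100).fit(np.array(X), np.array(Y))

# Receding-horizon step: random-shooting search over candidate control sequences.
def choose_control(t_in, t_out, setpoint=21.0, candidates=500, energy_weight=0.05):
    U = rng.uniform(0, 1, (candidates, HORIZON))
    feats = np.column_stack([np.full(candidates, t_in), np.full(candidates, t_out), U])
    pred = model.predict(feats)                          # predicted output trajectories
    cost = ((pred - setpoint) ** 2).sum(axis=1) + energy_weight * U.sum(axis=1)
    return U[cost.argmin()]                              # only the first input is applied in practice

print(choose_control(17.0, 5.0)[:3])
```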
{ "cite_N": [ "@cite_0", "@cite_16" ], "mid": [ "2040871222", "2790404719" ], "abstract": [ "This paper is a brief survey on the existing problems and challenges inherent in model-based control (MBC) theory, and some important issues in the analysis and design of data-driven control (DDC) methods are here reviewed and addressed. The necessity of data-driven control is discussed from the aspects of the history, the present, and the future of control theories and applications. The state of the art of the existing DDC methods and applications are presented with appropriate classifications and insights. The relationship between the MBC method and the DDC method, the differences among different DDC methods, and relevant topics in data-driven optimization and modeling are also highlighted. Finally, the perspective of DDC and associated research topics are briefly explored and discussed.", "Model Predictive Control (MPC) plays an important role in optimizing operations of complex cyber-physical systems because of its ability to forecast system’s behavior and act under system level constraints. However, MPC requires reasonably accurate underlying models of the system. In many applications, such as building control for energy management, Demand Response, or peak power reduction, obtaining a high-fidelity physics-based model is cost and time prohibitive, thus limiting the widespread adoption of MPC. To this end, we propose a data-driven control algorithm for MPC that relies only on the historical data. We use multi-output regression trees to represent the system’s dynamics over multiple future time steps and formulate a finite receding horizon control problem that can be solved in real-time in closed-loop with the physical plant. We apply this algorithm to peak power reduction in buildings to optimally trade-off peak power reduction against thermal comfort without having to learn white grey box models of the systems dynamics." ] }
1901.06263
2911010497
Big data analytics for smart city electricity consumption is presented in @cite_28 . The authors use computational intelligence algorithms to model the consumption of eight university buildings. The outcome consists of offline policies to optimise energy usage across the campus. In @cite_11 a different application is described, using decision trees for occupancy estimation in office buildings. Occupancy modelling and estimation is a critical task in smart buildings, as the occupancy level and its accurate forecasting directly impact the HVAC conditioning strategy of the building and the avoidance of wasteful control. Fault and anomaly detection with a rule-based system is described in @cite_25 . The main contributions relate to building automated anomaly-detection rules with regard to energy efficiency. This is achieved by combining data mining on historical data with expert information about energy efficiency. @cite_20 illustrate the results of the BRIDGE diagnosis strategy on a dedicated building sensor test bed. By treating sensor faults as data deviations, FDD can accurately detect abnormal conditions. FDD for ventilation subsystems is also covered by @cite_23 using a graph-based approach.
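For the occupancy-estimation idea of @cite_11 , a minimal decision-tree sketch is shown below: common sensor channels (CO2, plug power, motion counts) classify an occupancy range, and the depth-limited tree can be printed as nested if-then-else rules. The feature set and synthetic data are assumptions for illustration only:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(2)

# Synthetic sensor snapshots: [CO2 ppm, plug power W, motion events per 15 min]
n = 1500
occupancy = rng.integers(0, 4, n)                     # occupancy range label 0..3
X = np.column_stack([400 + 150 * occupancy + rng.normal(0, 40, n),
                     60 + 90 * occupancy + rng.normal(0, 25, n),
                     occupancy * 3 + rng.poisson(1, n)])

# Depth-limited tree so the resulting rules stay human-readable,
# in the spirit of the cited study.
clf = DecisionTreeClassifier(max_depth=3).fit(X, occupancy)
print(export_text(clf, feature_names=["co2_ppm", "power_w", "motion_cnt"]))
```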
{ "cite_N": [ "@cite_28", "@cite_23", "@cite_25", "@cite_20", "@cite_11" ], "mid": [ "2790486829", "", "2294864664", "2788535427", "2475772748" ], "abstract": [ "New technologies such as sensor networks have been incorporated into the management of buildings for organizations and cities. Sensor networks have led to an exponential increase in the volume of data available in recent years, which can be used to extract consumption patterns for the purposes of energy and monetary savings. For this reason, new approaches and strategies are needed to analyze information in big data environments. This paper proposes a methodology to extract electric energy consumption patterns in big data time series, so that very valuable conclusions can be made for managers and governments. The methodology is based on the study of four clustering validity indices in their parallelized versions along with the application of a clustering technique. In particular, this work uses a voting system to choose an optimal number of clusters from the results of the indices, as well as the application of the distributed version of the k-means algorithm included in Apache Spark’s Machine Learning Library. The results, using electricity consumption for the years 2011–2017 for eight buildings of a public university, are presented and discussed. In addition, the performance of the proposed methodology is evaluated using synthetic big data, which cab represent thousands of buildings in a smart city. Finally, policies derived from the patterns discovered are proposed to optimize energy usage across the university campus.", "", "Automatic system to detect energy efficiency anomalies in smart buildings.Definition and testing of energy efficiency indicators to quantify energy savings.Knowledge extraction from data and HVAC experts through Data Mining techniques.In this study a full set of anomalous EE consumption patterns are detected.During test period more than 10 of day presented a kind of EE anomaly. The rapidly growing world energy use already has concerns over the exhaustion of energy resources and heavy environmental impacts. As a result of these concerns, a trend of green and smart cities has been increasing. To respond to this increasing trend of smart cities with buildings every time more complex, in this paper we have proposed a new method to solve energy inefficiencies detection problem in smart buildings. This solution is based on a rule-based system developed through data mining techniques and applying the knowledge of energy efficiency experts. A set of useful energy efficiency indicators is also proposed to detect anomalies. The data mining system is developed through the knowledge extracted by a full set of building sensors. So, the results of this process provide a set of rules that are used as a part of a decision support system for the optimisation of energy consumption and the detection of anomalies in smart buildings.", "Abstract This paper aims at designing a diagnosis tool that shall be used to support experts for detecting and localizing faults in a sensor grid of a building system. It is a tool-aided diagnosis with mathematical models and reasoning tools that determines whether a sensor is faulty or not. It is based on detection tests and logical diagnosis analysis for the first principle. At the beginning, a succinct state of art is provided for existing fault detection and diagnosis (FDD) methods. 
Then, the diagnosis algorithm is proposed: it deals with a BRIDGE approach of FDD for a building system focusing on sensor grids. Sensor faults are diagnosed thanks to detection tests and diagnosis first principle. In addition, this approach provides the possible fault modes including multiple sensor faults. Finally, a series of tests are performed in order to validate the approach. An application example shows the efficiency of the proposed technique: an office setting at Grenoble Institute of Technology.", "A general approach is proposed to determine the common sensors that shall be used to estimate and classify the approximate number of people (within a range) in a room. The range is dynamic and depends on the maximum occupancy met in a training data set for instance. Means to estimate occupancy include motion detection, power consumption, CO2 concentration sensors, microphone or door window positions. The proposed approach is inspired by machine learning. It starts by determining the most useful measurements in calculating information gains. Then, estimation algorithms are proposed: they rely on decision tree learning algorithms because these yield decision rules readable by humans, which correspond to nested if-then-else rules, where thresholds can be adjusted depending on the living areas considered. In addition, the decision tree depth is limited in order to simplify the analysis of the tree rules. Finally, an economic analysis is carried out to evaluate the cost and the most relevant sensor sets, with cost and accuracy comparison for the estimation of occupancy. C45 and random forest algorithms have been applied to an office setting, with average estimation error of 0.19–0.18. Over-fitting issues and best sensor sets are discussed." ] }
1901.06263
2911010497
@cite_2 describe in detail the explicit data modelling process for smart building evaluation. A case study is carried out for energy forecasting of a target building using techniques such as Bayesian Regularized Neural Networks and Random Forests. SVMs are also considered but provide weaker results in this specific scenario. Finally, in @cite_14 SVM is applied to a regression problem where, instead of a class label, the output of the algorithm is a numeric value.
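The SVM-regression setting of @cite_14 (time-delay coordinates of past energy use plus weather as features, a numeric energy value as output) can be sketched as follows; the lag count, kernel parameters and synthetic series are placeholder assumptions:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(3)
LAGS = 24  # time-delay embedding: the past 24 hourly readings

# Synthetic hourly energy consumption and temperature with a daily cycle plus noise.
hours = np.arange(24 * 120)
energy = 100 + 30 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 5, hours.size)
temperature = 15 + 8 * np.sin(2 * np.pi * hours / 24 - 1) + rng.normal(0, 1, hours.size)

# Build (lagged energy, current temperature) -> next-hour energy samples.
X = np.array([np.append(energy[t - LAGS:t], temperature[t]) for t in range(LAGS, hours.size - 1)])
y = energy[LAGS + 1:]

split = int(0.8 * len(X))
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.5))
model.fit(X[:split], y[:split])

pred = model.predict(X[split:])
mae = np.abs(pred - y[split:]).mean()
print(f"hold-out MAE: {mae:.2f} (same units as the energy series)")
```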
{ "cite_N": [ "@cite_14", "@cite_2" ], "mid": [ "1558927261", "2364005411" ], "abstract": [ "As our society gains a better understanding of how humans have negatively impacted the environment, research related to reducing carbon emissions and overall energy consumption has become increasingly important. One of the simplest ways to reduce energy usage is by making current buildings less wasteful. By improving energy efficiency, this method of lowering our carbon footprint is particularly worthwhile because it reduces energy costs of operating the building, unlike many environmental initiatives that require large monetary investments. In order to improve the efficiency of the heating, ventilation, and air conditioning (HVAC) system of a Manhattan skyscraper, 345 Park Avenue, a predictive computer model was designed to forecast the amount of energy the building will consume. This model uses Support Vector Machine Regression (SVMR), a method that builds a regression based purely on historical data of the building, requiring no knowledge of its size, heating and cooling methods, or any other physical properties. SVMR employs time-delay coordinates as a representation of the past to create the feature vectors for SVM training. This pure dependence on historical data makes the model very easily applicable to different types of buildings with few model adjustments. The SVM regression model was built to predict a week of future energy usage based on past energy, temperature, and dew point temperature data.", "Abstract This work presents how to proceed during the processing of all available data coming from smart buildings to generate models that predict their energy consumption. For this, we propose a methodology that includes the application of different intelligent data analysis techniques and algorithms that have already been applied successfully in related scenarios, and the selection of the best one depending on the value of the selected metric used for the evaluation. This result depends on the specific characteristics of the target building and the available data. Among the techniques applied to a reference building, Bayesian Regularized Neural Networks and Random Forest are selected because they provide the most accurate predictive results." ] }
1901.06263
2911010497
The current paper also builds upon our own previous work dedicated to decision support systems for renewable-energy campus microgrids @cite_10 and to Model Predictive Control (MPC) for building simulations @cite_3 . Earlier work also included exploratory data analysis on a single building AHU, without further analysis or implementation of learning models at a larger scale @cite_9 . In this context we have developed contributions towards a better understanding of the data collected from smart buildings. Figure 1 summarises this section with regard to the role of data mining for DDC in this scenario. This generic approach is mapped onto our particular scenario as well. Each of the ventilation units implements local control loops which have to comply with setpoints given by the building operator according to occupancy schedules or seasonal adjustments. Without influencing the low-level control, we look at input-output data to indirectly characterise the system behaviour, with the end goal of improving the control loop parameters and setpoints through a learning framework.
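The two-step framework summarised in the abstract above (SAX compression of raw streams followed by an SVM) can be sketched without external time-series libraries; the PAA segment count, alphabet size and synthetic AHU-like profiles below are illustrative assumptions, not the paper's actual configuration:

```python
import numpy as np
from scipy.stats import norm
from sklearn.svm import SVC

rng = np.random.default_rng(4)

def sax(series, n_segments=16, alphabet_size=4):
    """Symbolic Aggregate Approximation: z-normalise, PAA-average, then bin with Gaussian breakpoints."""
    z = (series - series.mean()) / (series.std() + 1e-9)
    paa = z.reshape(n_segments, -1).mean(axis=1)              # piecewise aggregate approximation
    breakpoints = norm.ppf(np.linspace(0, 1, alphabet_size + 1)[1:-1])
    return np.digitize(paa, breakpoints)                      # one symbol index per segment

# Synthetic daily AHU profiles (96 quarter-hour samples): 0 = normal mode, 1 = shifted mode.
def profile(label, length=96):
    return np.sin(np.linspace(0, 2 * np.pi, length)) + (0.8 if label else 0.0) + rng.normal(0, 0.2, length)

labels = rng.integers(0, 2, 400)
X = np.array([sax(profile(lbl)) for lbl in labels])
clf = SVC(kernel="rbf").fit(X[:300], labels[:300])            # train on the first 300 days
print("hold-out accuracy:", clf.score(X[300:], labels[300:]))
```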
{ "cite_N": [ "@cite_9", "@cite_10", "@cite_3" ], "mid": [ "2770442362", "2574708214", "2512691281" ], "abstract": [ "Modern, densely instrumented, smart buildings generate large amounts of raw data. This poses significant challenges from both the data management perspective as well as leveraging the associated information for enabling advanced energy management, fault detection and control strategies. Networks of intelligent sensors, controllers and actuators currently allow fine grained monitoring of the building state but shift the challenge to exploiting these large quantities of data in an efficient manner. We discuss methods for black-box modelling of input-output data stemming from buildings. Using exploratory analysis it is argued that data mining inspired approaches allow for fast and effective assessment of building state and associated predictions. These are illustrated using a case study on real data collected from commercial-grade air handling units of a research building. Conclusions point out to the feasibility of this approach as well as potential for data mining techniques in smart building control applications.", "This paper presents the development of a decision support system (DSS) for a low-voltage grid with renewable energy sources (photovoltaic panels and wind turbine) which aims at achieving energy balance in a pilot microgrid with less energy consumed from the network. The DSS is based on a procedural decision algorithm that is applied on a pilot microgrid, with energy produced from renewable energy sources, but it can be easily generalized for any microgrid. To underline the benefits of the developed DSS two case scenarios (a household and an office building with different energy consumptions) were analyzed. The results and throw added value of the paper is the description of an implemented microgrid, the development and testing of the decision support system on real measured data. Experimental results have demonstrated the validity of the approach in rule-based decision switching.", "The paper presents the system modeling, controller design and numerical simulation results for thermal energy management of a real office building. Focus is set on an efficient and unitary approach which leads from detailed civil engineering specifications of the building elements to compact and effective models which are suitable for control. A modular semi-automated approach in used in order to derive the discrete state-space representation of the system model. This combines the key thermal dynamics of the constructions with a modular list of thermal loads and losses, defined as external heat fluxes. A balanced trade-off is thus achieved between model accuracy and complexity through a compact and effective representation of the plant dynamics. The control strategy is based on a predictive controller which evolves an optimized system input vector, in a closed loop. Paths for occupant feedback integration into a single framework, by using human-in-the-loop models via disturbance channels are also discussed." ] }
1901.06261
2910933843
Application of neural networks to a vast variety of practical applications is transforming the way AI is applied in practice. Pre-trained neural network models available through APIs, or the capability to custom-train pre-built neural network architectures with customer data, have made the consumption of AI by developers much simpler and resulted in broad adoption of these complex AI models. While pre-built network models exist for certain scenarios, to meet the constraints that are unique to each application, AI teams need to consider developing custom neural network architectures that trade off accuracy against memory footprint to satisfy the tight constraints of their unique use-cases. However, only a small proportion of data science teams have the skills and experience needed to create a neural network from scratch, and the demand far exceeds the supply. In this paper, we present NeuNetS: an automated Neural Network Synthesis engine for custom neural network design that is available as part of IBM's AI OpenScale product. NeuNetS is available for both Text and Image domains and can build neural networks for specific tasks in a fraction of the time it takes today with human effort, and with accuracy similar to that of human-designed AI models.
Evolutionary algorithms and reinforcement learning are currently the two state-of-the-art techniques used by neural network architecture search algorithms. With Neural Architecture Search, @cite_38 demonstrated, in an experiment spanning 28 days and 800 GPUs, that neural network architectures with performance close to state-of-the-art architectures can be found. In parallel with, or inspired by, this work, others proposed using reinforcement learning to discover sequential architectures @cite_13 , reducing the search space to repeating cells @cite_71 @cite_0 , or applying function-preserving actions to accelerate the search @cite_25 .
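For orientation only, the toy sketch below uses plain random search, a far simpler baseline than the reinforcement-learning or evolutionary controllers cited above, to show the basic loop these methods share: sample an architecture description, train it, and use validation accuracy as the search reward. The dataset, search space and budget are arbitrary assumptions:

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(5)
X, y = load_digits(return_X_y=True)
X_tr, X_val, y_tr, y_val = train_test_split(X / 16.0, y, test_size=0.3, random_state=0)

best = (None, 0.0)
for _ in range(10):                                    # tiny search budget for illustration
    depth = rng.integers(1, 4)                         # sample an architecture description
    widths = tuple(int(rng.choice([32, 64, 128])) for _ in range(depth))
    net = MLPClassifier(hidden_layer_sizes=widths, max_iter=300).fit(X_tr, y_tr)
    score = net.score(X_val, y_val)                    # validation accuracy acts as the reward
    if score > best[1]:
        best = (widths, score)
print("best sampled architecture:", best)
```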
{ "cite_N": [ "@cite_38", "@cite_0", "@cite_71", "@cite_13", "@cite_25" ], "mid": [ "2963374479", "2747469359", "2964081807", "2556833785", "2773706593" ], "abstract": [ "Neural networks are powerful and flexible models that work well for many difficult learning tasks in image, speech and natural language understanding. Despite their success, neural networks are still hard to design. In this paper, we use a recurrent network to generate the model descriptions of neural networks and train this RNN with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation set. On the CIFAR-10 dataset, our method, starting from scratch, can design a novel network architecture that rivals the best human-invented architecture in terms of test set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is 0.09 percent better and 1.05x faster than the previous state-of-the-art model that used a similar architectural scheme. On the Penn Treebank dataset, our model can compose a novel recurrent cell that outperforms the widely-used LSTM cell, and other state-of-the-art baselines. Our cell achieves a test set perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than the previous state-of-the-art model. The cell can also be transferred to the character language modeling task on PTB and achieves a state-of-the-art perplexity of 1.214.", "Convolutional neural network provides an end-to-end solution to train many computer vision tasks and has gained great successes. However, the design of network architectures usually relies heavily on expert knowledge and is hand-crafted. In this paper, we provide a solution to automatically and efficiently design high performance network architectures. To reduce the search space of network design, we focus on constructing network blocks, which can be stacked to generate the whole network. Blocks are generated through an agent, which is trained with Q-learning to maximize the expected accuracy of the searching blocks on the learning task. Distributed asynchronous framework and early stop strategy are used to accelerate the training process. Our experimental results demonstrate that the network architectures designed by our approach perform competitively compared with hand-crafted state-of-the-art networks. We trained the Q-learning on CIFAR-100, and evaluated on CIFAR10 and ImageNet, the designed block structure achieved 3.60 error on CIFAR-10 and competitive result on ImageNet. The Q-learning process can be efficiently trained only on 32 GPUs in 3 days.", "Developing neural network image classification models often requires significant architecture engineering. In this paper, we study a method to learn the model architectures directly on the dataset of interest. As this approach is expensive when the dataset is large, we propose to search for an architectural building block on a small dataset and then transfer the block to a larger dataset. The key contribution of this work is the design of a new search space (which we call the \"NASNet search space\") which enables transferability. In our experiments, we search for the best convolutional layer (or \"cell\") on the CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking together more copies of this cell, each with their own parameters to design a convolutional architecture, which we name a \"NASNet architecture\". 
We also introduce a new regularization technique called ScheduledDropPath that significantly improves generalization in the NASNet models. On CIFAR-10 itself, a NASNet found by our method achieves 2.4 error rate, which is state-of-the-art. Although the cell is not searched for directly on ImageNet, a NASNet constructed from the best cell achieves, among the published works, state-of-the-art accuracy of 82.7 top-1 and 96.2 top-5 on ImageNet. Our model is 1.2 better in top-1 accuracy than the best human-invented architectures while having 9 billion fewer FLOPS - a reduction of 28 in computational demand from the previous state-of-the-art model. When evaluated at different levels of computational cost, accuracies of NASNets exceed those of the state-of-the-art human-designed models. For instance, a small version of NASNet also achieves 74 top-1 accuracy, which is 3.1 better than equivalently-sized, state-of-the-art models for mobile platforms. Finally, the image features learned from image classification are generically useful and can be transferred to other computer vision problems. On the task of object detection, the learned features by NASNet used with the Faster-RCNN framework surpass state-of-the-art by 4.0 achieving 43.1 mAP on the COCO dataset.", "At present, designing convolutional neural network (CNN) architectures requires both human expertise and labor. New architectures are handcrafted by careful experimentation or modified from a handful of existing networks. We introduce MetaQNN, a meta-modeling algorithm based on reinforcement learning to automatically generate high-performing CNN architectures for a given learning task. The learning agent is trained to sequentially choose CNN layers using @math -learning with an @math -greedy exploration strategy and experience replay. The agent explores a large but finite space of possible architectures and iteratively discovers designs with improved performance on the learning task. On image classification benchmarks, the agent-designed networks (consisting of only standard convolution, pooling, and fully-connected layers) beat existing networks designed with the same layer types and are competitive against the state-of-the-art methods that use more complex layer types. We also outperform existing meta-modeling approaches for network design on image classification tasks.", "Techniques for automatically designing deep neural network architectures such as reinforcement learning based approaches have recently shown promising results. However, their success is based on vast computational resources (e.g. hundreds of GPUs), making them difficult to be widely used. A noticeable limitation is that they still design and train each network from scratch during the exploration of the architecture space, which is highly inefficient. In this paper, we propose a new framework toward efficient architecture search by exploring the architecture space based on the current network and reusing its weights. We employ a reinforcement learning agent as the meta-controller, whose action is to grow the network depth or layer width with function-preserving transformations. As such, the previously validated networks can be reused for further exploration, thus saves a large amount of computational cost. We apply our method to explore the architecture space of the plain convolutional neural networks (no skip-connections, branching etc.) on image benchmark datasets (CIFAR-10, SVHN) with restricted computational resources (5 GPUs). 
Our method can design highly competitive networks that outperform existing networks using the same design scheme. On CIFAR-10, our model without skip-connections achieves 4.23 test error rate, exceeding a vast majority of modern architectures and approaching DenseNet. Furthermore, by applying our method to explore the DenseNet architecture space, we are able to achieve more accurate networks with fewer parameters." ] }
1901.06261
2910933843
Application of neural networks to a vast variety of practical applications is transforming the way AI is applied in practice. Pre-trained neural network models available through APIs or capability to custom train pre-built neural network architectures with customer data has made the consumption of AI by developers much simpler and resulted in broad adoption of these complex AI models. While prebuilt network models exist for certain scenarios, to try and meet the constraints that are unique to each application, AI teams need to think about developing custom neural network architectures that can meet the tradeoff between accuracy and memory footprint to achieve the tight constraints of their unique use-cases. However, only a small proportion of data science teams have the skills and experience needed to create a neural network from scratch, and the demand far exceeds the supply. In this paper, we present NeuNetS : An automated Neural Network Synthesis engine for custom neural network design that is available as part of IBM's AI OpenScale's product. NeuNetS is available for both Text and Image domains and can build neural networks for specific tasks in a fraction of the time it takes today with human effort, and with accuracy similar to that of human-designed AI models.
Various techniques exist that try to shorten the training time. One idea is to terminate unpromising training runs early: the partially observed learning curve is either used directly to decide when to terminate a run @cite_73 or first extrapolated and then used @cite_82 @cite_61 @cite_21. Other methods sample different architectures and then predict their likely performance. Peephole @cite_28 predicts a network's accuracy by analyzing only the network structure; however, it works only in a fixed-dataset test case. SMASH uses a hypernetwork to predict weights for an architecture without training it and uses the resulting validation performance as a proxy for its performance after training @cite_19. Others reduce the search time by sharing or reusing model weights @cite_25 @cite_65 @cite_49 @cite_52.
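As a concrete illustration of the early-termination idea, the sketch below stops a run whose partial learning curve falls well below the median of previously completed runs at the same epoch. This is only a simple median-style rule with assumed numbers and an assumed slack parameter; it is not the probabilistic extrapolation model of the cited work.

```python
# Illustrative early-termination rule (not the learning-curve extrapolation
# models of the cited papers): stop a run if its partial validation accuracy
# falls well below the median of completed runs at the same epoch.
from statistics import median

def should_stop(partial_curve, completed_curves, slack=0.05):
    """partial_curve: validation accuracies of the running model so far.
    completed_curves: list of full curves from earlier, finished runs."""
    step = len(partial_curve) - 1
    reference = [c[step] for c in completed_curves if len(c) > step]
    if len(reference) < 3:          # not enough history to judge reliably
        return False
    return partial_curve[-1] < median(reference) - slack

# Example: three finished runs and one new run that is clearly lagging.
completed = [[0.30, 0.52, 0.61, 0.68, 0.72],
             [0.28, 0.50, 0.63, 0.70, 0.74],
             [0.33, 0.55, 0.64, 0.69, 0.73]]
running = [0.20, 0.31, 0.38, 0.41]
print(should_stop(running, completed))   # True: terminate and free the GPU
```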
{ "cite_N": [ "@cite_61", "@cite_28", "@cite_21", "@cite_65", "@cite_52", "@cite_19", "@cite_49", "@cite_73", "@cite_25", "@cite_82" ], "mid": [ "2751836095", "2771751675", "", "2785366763", "2914037680", "2963137684", "2951104886", "", "2773706593", "2266822037" ], "abstract": [ "Different neural network architectures, hyperparameters and training protocols lead to different performances as a function of time. Human experts routinely inspect the resulting learning curves to quickly terminate runs with poor hyperparameter settings and thereby considerably speed up manual hyperparameter optimization. Exploiting the same information in automatic Bayesian hyperparameter optimization requires a probabilistic model of learning curves across hyperparameter settings. Here, we study the use of Bayesian neural networks for this purpose and improve their performance by a specialized learning curve layer.", "The quest for performant networks has been a significant force that drives the advancements of deep learning in recent years. While rewarding, improving network design has never been an easy journey. The large design space combined with the tremendous cost required for network training poses a major obstacle to this endeavor. In this work, we propose a new approach to this problem, namely, predicting the performance of a network before training, based on its architecture. Specifically, we develop a unified way to encode individual layers into vectors and bring them together to form an integrated description via LSTM. Taking advantage of the recurrent network's strong expressive power, this method can reliably predict the performances of various network architectures. Our empirical studies showed that it not only achieved accurate predictions but also produced consistent rankings across datasets -- a key desideratum in performance prediction.", "", "We propose Efficient Neural Architecture Search (ENAS), a fast and inexpensive approach for automatic model design. In ENAS, a controller learns to discover neural network architectures by searching for an optimal subgraph within a large computational graph. The controller is trained with policy gradient to select a subgraph that maximizes the expected reward on the validation set. Meanwhile the model corresponding to the selected subgraph is trained to minimize a canonical cross entropy loss. Thanks to parameter sharing between child models, ENAS is fast: it delivers strong empirical performances using much fewer GPU-hours than all existing automatic model design approaches, and notably, 1000x less expensive than standard Neural Architecture Search. On the Penn Treebank dataset, ENAS discovers a novel architecture that achieves a test perplexity of 55.8, establishing a new state-of-the-art among all methods without post-training processing. On the CIFAR-10 dataset, ENAS designs novel architectures that achieve a test error of 2.89 , which is on par with NASNet (, 2018), whose test error is 2.65 .", "The design of convolutional neural network architectures for a new image data set is a laborious and computational expensive task which requires expert knowledge. We propose a novel neuro-evolutionary technique to solve this problem without human interference. Our method assumes that a convolutional neural network architecture is a sequence of neuro-cells and keeps mutating them using function-preserving operations. This novel combination of approaches has several advantages. 
We define the network architecture by a sequence of repeating neuro-cells which reduces the search space complexity. Furthermore, these cells are possibly transferable and can be used in order to arbitrarily extend the complexity of the network. Mutations based on function-preserving operations guarantee better parameter initialization than random initialization such that less training time is required per network architecture. Our proposed method finds within 12 GPU hours neural network architectures that can achieve a classification error of about 4 and 24 with only 5.5 and 6.5 million parameters on CIFAR-10 and CIFAR-100, respectively. In comparison to competitor approaches, our method provides similar competitive results but requires orders of magnitudes less search time and in many cases less network parameters.", "Designing architectures for deep neural networks requires expert knowledge and substantial computation time. We propose a technique to accelerate architecture selection by learning an auxiliary HyperNet that generates the weights of a main model conditioned on that model's architecture. By comparing the relative validation performance of networks with HyperNet-generated weights, we can effectively search over a wide range of architectures at the cost of a single training run. To facilitate this search, we develop a flexible mechanism based on memory read-writes that allows us to define a wide range of network connectivity patterns, with ResNet, DenseNet, and FractalNet blocks as special cases. We validate our method (SMASH) on CIFAR-10 and CIFAR-100, STL-10, ModelNet10, and Imagenet32x32, achieving competitive performance with similarly-sized hand-designed networks.", "This paper addresses the scalability challenge of architecture search by formulating the task in a differentiable manner. Unlike conventional approaches of applying evolution or reinforcement learning over a discrete and non-differentiable search space, our method is based on the continuous relaxation of the architecture representation, allowing efficient search of the architecture using gradient descent. Extensive experiments on CIFAR-10, ImageNet, Penn Treebank and WikiText-2 show that our algorithm excels in discovering high-performance convolutional architectures for image classification and recurrent architectures for language modeling, while being orders of magnitude faster than state-of-the-art non-differentiable techniques. Our implementation has been made publicly available to facilitate further research on efficient architecture search algorithms.", "", "Techniques for automatically designing deep neural network architectures such as reinforcement learning based approaches have recently shown promising results. However, their success is based on vast computational resources (e.g. hundreds of GPUs), making them difficult to be widely used. A noticeable limitation is that they still design and train each network from scratch during the exploration of the architecture space, which is highly inefficient. In this paper, we propose a new framework toward efficient architecture search by exploring the architecture space based on the current network and reusing its weights. We employ a reinforcement learning agent as the meta-controller, whose action is to grow the network depth or layer width with function-preserving transformations. As such, the previously validated networks can be reused for further exploration, thus saves a large amount of computational cost. 
We apply our method to explore the architecture space of the plain convolutional neural networks (no skip-connections, branching etc.) on image benchmark datasets (CIFAR-10, SVHN) with restricted computational resources (5 GPUs). Our method can design highly competitive networks that outperform existing networks using the same design scheme. On CIFAR-10, our model without skip-connections achieves 4.23 test error rate, exceeding a vast majority of modern architectures and approaching DenseNet. Furthermore, by applying our method to explore the DenseNet architecture space, we are able to achieve more accurate networks with fewer parameters.", "Deep neural networks (DNNs) show very strong performance on many machine learning problems, but they are very sensitive to the setting of their hyperparameters. Automated hyperparameter optimization methods have recently been shown to yield settings competitive with those found by human experts, but their widespread adoption is hampered by the fact that they require more computational resources than human experts. Humans have one advantage: when they evaluate a poor hyperparameter setting they can quickly detect (after a few steps of stochastic gradient descent) that the resulting network performs poorly and terminate the corresponding evaluation to save time. In this paper, we mimic the early termination of bad runs using a probabilistic model that extrapolates the performance from the first part of a learning curve. Experiments with a broad range of neural network architectures on various prominent object recognition benchmarks show that our resulting approach speeds up state-of-the-art hyperparameter optimization methods for DNNs roughly twofold, enabling them to find DNN settings that yield better performance than those chosen by human experts." ] }
1901.06257
2910992039
Knowledge discovery from GPS trajectory data is an important topic in several scientific areas, including data mining, human behavior analysis, and user modeling. This paper proposes a task that assigns personalized visited-POIs. Its goal is to estimate fine-grained and pre-defined locations (i.e., points of interest (POI)) that are actually visited by users and assign visited-location information to the corresponding span of their (personal) GPS trajectories. We also introduce a novel algorithm to solve this assignment task. First, we exhaustively extract stay-points as candidates for significant locations using a variant of a conventional stay-point extraction method. Then we select significant locations and simultaneously assign visited-POIs to them by considering various aspects, which we formulate in integer linear programming. Experimental results conducted on an actual user dataset show that our method achieves higher accuracy in the visited-POI assignment task than the various cascaded procedures of conventional methods.
Many studies on GPS trajectory mining exist, such as user activity estimation @cite_3 @cite_2 @cite_17 @cite_16, transportation mode detection @cite_4 @cite_43, and region analysis @cite_37 @cite_0. A typical approach to these tasks first extracts stay-points as a cue for solving them. We therefore believe that stay-point extraction is a key technology for many GPS trajectory mining tasks.
{ "cite_N": [ "@cite_37", "@cite_4", "@cite_3", "@cite_0", "@cite_43", "@cite_2", "@cite_16", "@cite_17" ], "mid": [ "2075190119", "2143394441", "2110953678", "2153207204", "578585133", "1988545169", "2171805028", "2138460198" ], "abstract": [ "In this paper, we aim to estimate the similarity between users according to their GPS trajectories. Our approach first models a user's GPS trajectories with a semantic location history (SLH), e.g., shopping malls → restaurants → cinemas. Then, we measure the similarity between different users' SLHs by using our maximal travel match (MTM) algorithm. The advantage of our approach lies in two aspects. First, SLH carries more semantic meanings of a user's interests beyond low-level geographic positions. Second, our approach can estimate the similarity between two users without overlaps in the geographic spaces, e.g., people living in different cities. We evaluate our method based on a real-world GPS dataset collected by 109 users in a period of 1 year. As a result, SLH-MTM outperforms the related works [4].", "Geographic information has spawned many novel Web applications where global positioning system (GPS) plays important roles in bridging the applications and end users. Learning knowledge from users' raw GPS data can provide rich context information for both geographic and mobile applications. However, so far, raw GPS data are still used directly without much understanding. In this paper, an approach based on supervised learning is proposed to automatically infer transportation mode from raw GPS data. The transportation mode, such as walking, driving, etc., implied in a user's GPS data can provide us valuable knowledge to understand the user. It also enables context-aware computing based on user's present transportation mode and design of an innovative user interface for Web users. Our approach consists of three parts: a change point-based segmentation method, an inference model and a post-processing algorithm based on conditional probability. The change point-based segmentation method was compared with two baselines including uniform duration based and uniform length based methods. Meanwhile, four different inference models including Decision Tree, Bayesian Net, Support Vector Machine (SVM) and Conditional Random Field (CRF) are studied in the experiments. We evaluated the approach using the GPS data collected by 45 users over six months period. As a result, beyond other two segmentation methods, the change point based method achieved a higher degree of accuracy in predicting transportation modes and detecting transitions between them. Decision Tree outperformed other inference models over the change point based segmentation method.", "Even though human movement and mobility patterns have a high degree of freedom and variation, they also exhibit structural patterns due to geographic and social constraints. Using cell phone location data, as well as data from two online location-based social networks, we aim to understand what basic laws govern human motion and dynamics. We find that humans experience a combination of periodic movement that is geographically limited and seemingly random jumps correlated with their social networks. Short-ranged travel is periodic both spatially and temporally and not effected by the social network structure, while long-distance travel is more influenced by social network ties. We show that social relationships can explain about 10 to 30 of all human movement, while periodic behavior explains 50 to 70 . 
Based on our findings, we develop a model of human mobility that combines periodic short range movements with travel due to the social network structure. We show that our model reliably predicts the locations and dynamics of future human movement and gives an order of magnitude better performance than present models of human mobility.", "The development of a city gradually fosters different functional regions, such as educational areas and business districts. In this paper, we propose a framework (titled DRoF) that Discovers Regions of different Functions in a city using both human mobility among regions and points of interests (POIs) located in a region. Specifically, we segment a city into disjointed regions according to major roads, such as highways and urban express ways. We infer the functions of each region using a topic-based inference model, which regards a region as a document, a function as a topic, categories of POIs (e.g., restaurants and shopping malls) as metadata (like authors, affiliations, and key words), and human mobility patterns (when people reach leave a region and where people come from and leave for) as words. As a result, a region is represented by a distribution of functions, and a function is featured by a distribution of mobility patterns. We further identify the intensity of each function in different locations. The results generated by our framework can benefit a variety of applications, including urban planning, location choosing for a business, and social recommendations. We evaluated our method using large-scale and real-world datasets, consisting of two POI datasets of Beijing (in 2010 and 2011) and two 3-month GPS trajectory datasets (representing human mobility) generated by over 12,000 taxicabs in Beijing in 2010 and 2011 respectively. The results justify the advantages of our approach over baseline methods solely using POIs or human mobility.", "Spatial trajectories have been bringing the unprecedented wealth to a variety of research communities. A spatial trajectory records the paths of a variety of moving objects, such as people who log their travel routes with GPS trajectories. The field of moving objects related research has become extremely active within the last few years, especially with all major database and data mining conferences and journals. Computing with Spatial Trajectories introduces the algorithms, technologies, and systems used to process, manage and understand existing spatial trajectories for different applications. This book also presents an overview on both fundamentals and the state-of-the-art research inspired by spatial trajectory data, as well as a special focus on trajectory pattern mining, spatio-temporal data mining and location-based social networks. Each chapter provides readers with a tutorial-style introduction to one important aspect of location trajectory computing, case studies and many valuable references to other relevant research work. Computing with Spatial Trajectories is designed as a reference or secondary text book for advanced-level students and researchers mainly focused on computer science and geography. Professionals working on spatial trajectory computing will also find this book very useful.", "In this work, we discover the daily location-driven routines that are contained in a massive real-life human dataset collected by mobile phones. Our goal is the discovery and analysis of human routines that characterize both individual and group behaviors in terms of location patterns. 
We develop an unsupervised methodology based on two differing probabilistic topic models and apply them to the daily life of 97 mobile phone users over a 16-month period to achieve these goals. Topic models are probabilistic generative models for documents that identify the latent structure that underlies a set of words. Routines dominating the entire group's activities, identified with a methodology based on the Latent Dirichlet Allocation topic model, include “going to work late”, “going home early”, “working nonstop” and “having no reception (phone off)” at different times over varying time-intervals. We also detect routines which are characteristic of users, with a methodology based on the Author-Topic model. With the routines discovered, and the two methods of characterizing days and users, we can then perform various tasks. We use the routines discovered to determine behavioral patterns of users and groups of users. For example, we can find individuals that display specific daily routines, such as “going to work early” or “turning off the mobile (or having no reception) in the evenings”. We are also able to characterize daily patterns by determining the topic structure of days in addition to determining whether certain routines occur dominantly on weekends or weekdays. Furthermore, the routines discovered can be used to rank users or find subgroups of users who display certain routines. We can also characterize users based on their entropy. We compare our method to one based on clustering using K-means. Finally, we analyze an individual's routines over time to determine regions with high variations, which may correspond to specific events.", "Learning patterns of human behavior from sensor data is extremely important for high-level activity inference. This paper describes how to extract a person's activities and significant places from traces of GPS data. The system uses hierarchically structured conditional random fields to generate a consistent model of a person's activities and places. In contrast to existing techniques, this approach takes the high-level context into account in order to detect the significant places of a person. Experiments show significant improvements over existing techniques. Furthermore, they indicate that the proposed system is able to robustly estimate a person's activities using a model that is trained from data collected by other persons.", "In this paper we define a general framework for activity recognition by building upon and extending Relational Markov Networks. Using the example of activity recognition from location data, we show that our model can represent a variety of features including temporal information such as time of day, spatial information extracted from geographic databases, and global constraints such as the number of homes or workplaces of a person. We develop an efficient inference and learning technique based on MCMC. Using GPS location data collected by multiple people we show that the technique can accurately label a person's activity locations. Furthermore, we show that it is possible to learn good models from less data by using priors extracted from other people's data." ] }
1901.06257
2910992039
Knowledge discovery from GPS trajectory data is an important topic in several scientific areas, including data mining, human behavior analysis, and user modeling. This paper proposes a task that assigns personalized visited-POIs. Its goal is to estimate fine-grained and pre-defined locations (i.e., points of interest (POI)) that are actually visited by users and assign visited-location information to the corresponding span of their (personal) GPS trajectories. We also introduce a novel algorithm to solve this assignment task. First, we exhaustively extract stay-points as candidates for significant locations using a variant of a conventional stay-point extraction method. Then we select significant locations and simultaneously assign visited-POIs to them by considering various aspects, which we formulate in integer linear programming. Experimental results conducted on an actual user dataset show that our method achieves higher accuracy in the visited-POI assignment task than the various cascaded procedures of conventional methods.
Various stay-point extraction methods have already been proposed. For example, Ashbrook and Starner @cite_13 @cite_19 use a modified @math -means method, @cite_20 use DBSCAN @cite_9, and @cite_39 employ Mean-Shift @cite_24, all of which are clustering-based. @cite_25 and @cite_26 instead define a stay-point as a set of consecutive positions that remain within a constant radius of a center point for longer than a constant time threshold. More recently, @cite_23 developed a more robust stay-point extraction algorithm that accounts for outliers and missing points in GPS trajectories.
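The radius-plus-duration definition of a stay-point can be made concrete with a short sketch. The following is a minimal implementation of that idea, not the exact algorithm of any cited method: the 200 m radius, the 20-minute duration threshold, and the haversine distance are assumptions chosen for illustration, and input points are assumed to be (timestamp in seconds, latitude, longitude) tuples.

```python
# Sketch of the radius-plus-duration notion of a stay-point described above
# (parameter values and the haversine distance are illustrative assumptions,
# not those of any cited method). Each input point is (timestamp_sec, lat, lon).
import math

def haversine_m(p, q):
    lat1, lon1, lat2, lon2 = map(math.radians, (p[1], p[2], q[1], q[2]))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 6371000 * 2 * math.asin(math.sqrt(a))

def extract_stay_points(track, radius_m=200, min_duration_s=20 * 60):
    stays, i, n = [], 0, len(track)
    while i < n:
        j = i + 1
        while j < n and haversine_m(track[i], track[j]) <= radius_m:
            j += 1
        if track[j - 1][0] - track[i][0] >= min_duration_s:
            pts = track[i:j]
            stays.append({                        # centroid + time span of the stay
                "lat": sum(p[1] for p in pts) / len(pts),
                "lon": sum(p[2] for p in pts) / len(pts),
                "arrive": pts[0][0],
                "leave": pts[-1][0],
            })
            i = j                                  # continue after the stay
        else:
            i += 1
    return stays
```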
{ "cite_N": [ "@cite_26", "@cite_9", "@cite_39", "@cite_24", "@cite_19", "@cite_23", "@cite_13", "@cite_25", "@cite_20" ], "mid": [ "2140251882", "1673310716", "2072787293", "2022686119", "2009155608", "2779280457", "2168685189", "2071049788", "2063372627" ], "abstract": [ "The increasing availability of GPS-enabled devices is changing the way people interact with the Web, and brings us a large amount of GPS trajectories representing people's location histories. In this paper, based on multiple users' GPS trajectories, we aim to mine interesting locations and classical travel sequences in a given geospatial region. Here, interesting locations mean the culturally important places, such as Tiananmen Square in Beijing, and frequented public areas, like shopping malls and restaurants, etc. Such information can help users understand surrounding locations, and would enable travel recommendation. In this work, we first model multiple individuals' location histories with a tree-based hierarchical graph (TBHG). Second, based on the TBHG, we propose a HITS (Hypertext Induced Topic Search)-based inference model, which regards an individual's access on a location as a directed link from the user to that location. This model infers the interest of a location by taking into account the following three factors. 1) The interest of a location depends on not only the number of users visiting this location but also these users' travel experiences. 2) Users' travel experiences and location interests have a mutual reinforcement relationship. 3) The interest of a location and the travel experience of a user are relative values and are region-related. Third, we mine the classical travel sequences among locations considering the interests of these locations and users' travel experiences. We evaluated our system using a large GPS dataset collected by 107 users over a period of one year in the real world. As a result, our HITS-based inference model outperformed baseline approaches like rank-by-count and rank-by-frequency. Meanwhile, when considering the users' travel experiences and location interests, we achieved a better performance beyond baselines, such as rank-by-count and rank-by-interest, etc.", "Clustering algorithms are attractive for the task of class identification in spatial databases. However, the application to large spatial databases rises the following requirements for clustering algorithms: minimal requirements of domain knowledge to determine the input parameters, discovery of clusters with arbitrary shape and good efficiency on large databases. The well-known clustering algorithms offer no solution to the combination of these requirements. In this paper, we present the new clustering algorithm DBSCAN relying on a density-based notion of clusters which is designed to discover clusters of arbitrary shape. DBSCAN requires only one input parameter and supports the user in determining an appropriate value for it. We performed an experimental evaluation of the effectiveness and efficiency of DBSCAN using synthetic data and real data of the SEQUOIA 2000 benchmark. The results of our experiments demonstrate that (1) DBSCAN is significantly more effective in discovering clusters of arbitrary shape than the well-known algorithm CLARANS, and that (2) DBSCAN outperforms CLARANS by a factor of more than 100 in terms of efficiency.", "The ability to create geotagged photos enables people to share their personal experiences as tourists at specific locations and times. 
Assuming that the collection of each photographer's geotagged photos is a sequence of visited locations, photo-sharing sites are important sources for gathering the location histories of tourists. By following their location sequences, we can find representative and diverse travel routes that link key landmarks. In this paper, we propose a travel route recommendation method that makes use of the photographers' histories as held by Flickr. Recommendations are performed by our photographer behavior model, which estimates the probability of a photographer visiting a landmark. We incorporate user preference and present location information into the probabilistic behavior model by combining topic models and Markov models. We demonstrate the effectiveness of the proposed method using a real-life dataset holding information from 71,718 photographers taken in the United States in terms of the prediction accuracy of travel behavior.", "Mean shift, a simple interactive procedure that shifts each data point to the average of data points in its neighborhood is generalized and analyzed in the paper. This generalization makes some k-means like clustering algorithms its special cases. It is shown that mean shift is a mode-seeking process on the surface constructed with a \"shadow\" kernal. For Gaussian kernels, mean shift is a gradient mapping. Convergence is studied for mean shift iterations. Cluster analysis if treated as a deterministic problem of finding a fixed point of mean shift that characterizes the data. Applications in clustering and Hough transform are demonstrated. Mean shift is also considered as an evolutionary strategy that performs multistart global optimization. >", "Wearable computers have the potential to act as intelligent agents in everyday life and to assist the user in a variety of tasks, using context to determine how to act. Location is the most common form of context used by these agents to determine the user's task. However, another potential use of location context is the creation of a predictive model of the user's future movements. We present a system that automatically clusters GPS data taken over an extended period of time into meaningful locations at multiple scales. These locations are then incorporated into a Markov model that can be consulted for use with a variety of applications in both single-user and collaborative scenarios.", "We tackle the problem of extracting stay regions from a geospatial trajectory where a user has stayed longer than a certain time threshold. There are four major difficulties with this problem: (1) stay regions are not only point-type ones such as at a bus-stop but large and arbitrary-shaped ones such as at a shopping mall; (2) trajectories contain spatial outliers; (3) there are missing points in trajectories; and (4) trajectories should be analyzed in an online mode. Previous algorithms cannot overcome these difficulties simultaneously. Density-based batch algorithms have advantages over the previous algorithms in discovering of arbitrary-shaped clusters from spatial data containing outliers; however, they do not consider temporal durations and thus have not been used for extracting stay regions. We extended a density-based algorithm so that it would work in a duration-based manner online and have robustness to missing points in stay regions while keeping its advantages. 
Experiments on real trajectories of 13 users conducting their daily activities for three weeks demonstrated that our algorithm statistically significantly outperformed five state-of-the-art algorithms in terms of F1 score and works well without trajectory preprocessing consisting of filtering, interpolating, and smoothing.", "Wearable computers have the potential to act as intelligent agents in everyday life and assist the user in a variety of tasks, using context to determine how to act. Location is the most common form of context used by these agents to determine the user's task. However, another potential use of location context is the creation of a predictive model of the user's future movements. We present a system that automatically clusters GPS data taken over an extended period of time into meaningful locations at multiple scales. These locations are then incorporated into a Markov model that can be consulted for use with a variety of applications in both single-user and collaborative scenarios.", "Location-aware systems are proliferating on a variety of platforms from laptops to cell phones. Though these systems offer two principal representations in which to work with location (coordinates and landmarks) they do not offer a means for working with the user-level notion of \"place\". A place is a locale that is important to a user and which carries a particular semantic meaning such as \"my place of work\", \"the place we live\", or \"My favorite lunch spot\". Mobile devices can make more intelligent decisions about how to behave when they are equipped with this higher-level information. For example, a cell phone can switch to a silent mode when its owner enters a place where a ringer is inappropriate (e.g., a movie theater, a lecture hall, a place for personal reflection.) In this paper, we describe an algorithm for extracting significant places from a trace of coordinates. Furthermore, we experimentally evaluate the algorithm with real, long-term data collected from three participants using a Place Lab client [15], a software client that computes location coordinates by listening for RF-emissions from known radio beacons in the environment (e.g. 802.11 access points, GSM cell towers).", "Personal media collections are often viewed and managed along the social dimension, the places we spend time at and the people we see, thus tools for extracting and using this information are required. We present novel algorithms for identifying socially significant places termed social spheres unobtrusively from GPS traces of daily life, and label them as one of Home, Work, or Other, with quantitative evaluation of 9 months taken from 5 users. We extract locational co-presence of these users and formulate a novel measure of social tie strength based on frequency of interaction, and the nature of spheres it occurs within. Comparative user studies of a multimedia browser designed to demonstrate the utility of social metadata indicate the usefulness of a simple interface allowing navigation and filtering in these terms. We note the application of social context is potentially much broader than personal media management, including context-aware device behaviour, life logs, social networks, and location-aware information services." ] }
1901.06257
2910992039
Knowledge discovery from GPS trajectory data is an important topic in several scientific areas, including data mining, human behavior analysis, and user modeling. This paper proposes a task that assigns personalized visited-POIs. Its goal is to estimate fine-grained and pre-defined locations (i.e., points of interest (POI)) that are actually visited by users and assign visited-location information to the corresponding span of their (personal) GPS trajectories. We also introduce a novel algorithm to solve this assignment task. First, we exhaustively extract stay-points as candidates for significant locations using a variant of a conventional stay-point extraction method. Then we select significant locations and simultaneously assign visited-POIs to them by considering various aspects, which we formulate in integer linear programming. Experimental results conducted on an actual user dataset show that our method achieves higher accuracy in the visited-POI assignment task than the various cascaded procedures of conventional methods.
The challenge that most resembles our personalized visited-POI assignment task is detecting semantic locations from GPS trajectory data @cite_22 @cite_28 @cite_26. @cite_22 extracted stay-points from trajectories and combined them with street addresses obtained from a reverse geocoder; their method then assigns a semantic label to each stay-point using yellow pages. This strategy resembles a nearest-neighbor assignment of entries in a POI database to stay-points. @cite_28 also extracted semantic locations from GPS trajectories in the same manner. @cite_26 extracted stay-points from user trajectories and applied a hierarchical clustering algorithm that merges stay-points into hierarchical stay areas on a structure called a tree-based hierarchical graph. The key difference between our personalized visited-POI assignment task and semantic location detection is that a semantic location is essentially determined on the basis of stay-points alone, whereas we determine a visited-POI on the basis of whether the user actually visited it.
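The nearest-neighbor flavor of semantic labeling mentioned above can be sketched as follows. This is a hypothetical minimal version: a flat in-memory POI list stands in for a reverse geocoder plus yellow pages, the equirectangular distance approximation and the 100 m cutoff are assumptions, and the example names and coordinates are made up.

```python
# Minimal nearest-neighbor labeling of stay-points with POIs, in the spirit of
# the strategy described above (a flat POI list stands in for a reverse
# geocoder plus yellow pages; the 100 m cutoff is an assumed parameter).
import math

def approx_dist_m(lat1, lon1, lat2, lon2):
    # Equirectangular approximation; adequate at city scale.
    kx = 111320 * math.cos(math.radians((lat1 + lat2) / 2))
    return math.hypot((lat1 - lat2) * 111320, (lon1 - lon2) * kx)

def assign_nearest_poi(stay_point, poi_db, max_dist_m=100):
    """stay_point: dict with 'lat'/'lon'; poi_db: list of (name, lat, lon)."""
    best, best_d = None, float("inf")
    for name, lat, lon in poi_db:
        d = approx_dist_m(stay_point["lat"], stay_point["lon"], lat, lon)
        if d < best_d:
            best, best_d = name, d
    return best if best_d <= max_dist_m else None   # None: no plausible POI nearby

pois = [("Cafe A", 35.6595, 139.7005), ("Bookstore B", 35.6600, 139.7010)]
print(assign_nearest_poi({"lat": 35.6596, "lon": 139.7004}, pois))  # -> "Cafe A"
```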
{ "cite_N": [ "@cite_28", "@cite_26", "@cite_22" ], "mid": [ "2122338787", "2140251882", "2067193733" ], "abstract": [ "With help of context, computer systems and applications could be more user-friendly, flexible and adaptable. With semantic locations, applications can understand users better or provide helpful services. We propose a method that automatically derives semantic locations from user’s trace. Our experimental results show that the proposed method identities up to 96 correct semantic locations.", "The increasing availability of GPS-enabled devices is changing the way people interact with the Web, and brings us a large amount of GPS trajectories representing people's location histories. In this paper, based on multiple users' GPS trajectories, we aim to mine interesting locations and classical travel sequences in a given geospatial region. Here, interesting locations mean the culturally important places, such as Tiananmen Square in Beijing, and frequented public areas, like shopping malls and restaurants, etc. Such information can help users understand surrounding locations, and would enable travel recommendation. In this work, we first model multiple individuals' location histories with a tree-based hierarchical graph (TBHG). Second, based on the TBHG, we propose a HITS (Hypertext Induced Topic Search)-based inference model, which regards an individual's access on a location as a directed link from the user to that location. This model infers the interest of a location by taking into account the following three factors. 1) The interest of a location depends on not only the number of users visiting this location but also these users' travel experiences. 2) Users' travel experiences and location interests have a mutual reinforcement relationship. 3) The interest of a location and the travel experience of a user are relative values and are region-related. Third, we mine the classical travel sequences among locations considering the interests of these locations and users' travel experiences. We evaluated our system using a large GPS dataset collected by 107 users over a period of one year in the real world. As a result, our HITS-based inference model outperformed baseline approaches like rank-by-count and rank-by-frequency. Meanwhile, when considering the users' travel experiences and location interests, we achieved a better performance beyond baselines, such as rank-by-count and rank-by-interest, etc.", "With the increasing deployment and use of GPS-enabled devices, massive amounts of GPS data are becoming available. We propose a general framework for the mining of semantically meaningful, significant locations, e.g., shopping malls and restaurants, from such data. We present techniques capable of extracting semantic locations from GPS data. We capture the relationships between locations and between locations and users with a graph. Significance is then assigned to locations using random walks over the graph that propagates significance among the locations. In doing so, mutual reinforcement between location significance and user authority is exploited for determining significance, as are aspects such as the number of visits to a location, the durations of the visits, and the distances users travel to reach locations. Studies using up to 100 million GPS records from a confined spatio-temporal region demonstrate that the proposal is effective and is capable of outperforming baseline methods and an extension of an existing proposal." ] }
1901.06257
2910992039
Knowledge discovery from GPS trajectory data is an important topic in several scientific areas, including data mining, human behavior analysis, and user modeling. This paper proposes a task that assigns personalized visited-POIs. Its goal is to estimate fine-grained and pre-defined locations (i.e., points of interest (POI)) that are actually visited by users and assign visited-location information to the corresponding span of their (personal) GPS trajectories. We also introduce a novel algorithm to solve this assignment task. First, we exhaustively extract stay-points as candidates for significant locations using a variant of a conventional stay-point extraction method. Then we select significant locations and simultaneously assign visited-POIs to them by considering various aspects, which we formulate in integer linear programming. Experimental results conducted on an actual user dataset show that our method achieves higher accuracy in the visited-POI assignment task than the various cascaded procedures of conventional methods.
POI recommendation tasks are closely related to our target task. Many previous studies have addressed POI recommendation @cite_29 @cite_31 @cite_14 @cite_27 @cite_33. Most used the collaborative filtering (CF) approach, which requires inter-user information, to produce recommendations. @cite_31 performed co-clustering on users and stay-points to improve CF recommendations. @cite_27 proposed a framework that fuses a user's preference for a POI with both social and geographical influences. @cite_14 showed that first-visit POIs are the most frequent entries in the check-in histories of location-based social networks and proposed a personalized PageRank-based method to improve the accuracy of first-visit POI recommendation. @cite_33 introduced a time-aware feature into a CF-based approach and showed that incorporating temporal and spatial influences improves the accuracy of POI recommendations. These studies exploit other users' check-in histories to recommend POIs to a user. @cite_29 studied location recommendation with a location category hierarchy and concluded that, since different users have varying levels of expertise and preferences, they should be treated differently in the recommendation process.
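To show how other users' check-in histories and geographical influence can be fused in a CF-style recommender, here is a toy sketch. It is not the model of any cited paper: the cosine similarity over visit counts, the distance-decay term, and all constants and example data are assumptions made purely for illustration.

```python
# Toy illustration of fusing user-based collaborative filtering with a
# geographical distance decay, as in the general approach described above.
# Not the model of any cited paper; similarities, decay, and data are assumed.
import math
from collections import defaultdict

checkins = {            # user -> {poi: visit count}
    "u1": {"cafe": 3, "gym": 1},
    "u2": {"cafe": 2, "museum": 4},
    "u3": {"gym": 5, "museum": 1},
}
poi_coords = {"cafe": (35.66, 139.70), "gym": (35.67, 139.71), "museum": (35.68, 139.76)}

def cosine(u, v):
    num = sum(u[p] * v[p] for p in set(u) & set(v))
    den = (math.sqrt(sum(x * x for x in u.values()))
           * math.sqrt(sum(x * x for x in v.values())))
    return num / den if den else 0.0

def geo_decay(poi, home, alpha=1.0):
    dlat = poi_coords[poi][0] - home[0]
    dlon = poi_coords[poi][1] - home[1]
    d_km = 111.0 * math.hypot(dlat, dlon)
    return 1.0 / (1.0 + alpha * d_km)         # closer POIs score higher

def recommend(target, home, top_k=2):
    scores = defaultdict(float)
    for other, hist in checkins.items():
        if other == target:
            continue
        sim = cosine(checkins[target], hist)
        for poi, cnt in hist.items():
            if poi not in checkins[target]:   # only recommend unvisited POIs
                scores[poi] += sim * cnt * geo_decay(poi, home)
    return sorted(scores.items(), key=lambda kv: -kv[1])[:top_k]

print(recommend("u1", home=(35.66, 139.70)))
```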
{ "cite_N": [ "@cite_14", "@cite_33", "@cite_29", "@cite_27", "@cite_31" ], "mid": [ "2112631146", "2073013176", "2139809240", "2087692915", "2009799282" ], "abstract": [ "This paper studies the problem of recommending new venues to users who participate in location-based social networks (LBSNs). As an increasingly larger number of users partake in LBSNs, the recommendation problem in this setting has attracted significant attention in research and in practical applications. The detailed information about past user behavior that is traced by the LBSN differentiates the problem significantly from its traditional settings. The spatial nature in the past user behavior and also the information about the user social interaction with other users, provide a richer background to build a more accurate and expressive recommendation model. Although there have been extensive studies on recommender systems working with user-item ratings, GPS trajectories, and other types of data, there are very few approaches that exploit the unique properties of the LBSN user check-in data. In this paper, we propose algorithms that create recommendations based on four factors: a) past user behavior (visited places), b) the location of each venue, c) the social relationships among the users, and d) the similarity between users. The proposed algorithms outperform traditional recommendation algorithms and other approaches that try to exploit LBSN information. To design our recommendation algorithms we study the properties of two real LBSNs, Brightkite and Gowalla, and analyze the relation between users and visited locations. An experimental evaluation using data from these LBSNs shows that the exploitation of the additional geographical and social information allows our proposed techniques to outperform the current state of the art.", "The availability of user check-in data in large volume from the rapid growing location based social networks (LBSNs) enables many important location-aware services to users. Point-of-interest (POI) recommendation is one of such services, which is to recommend places where users have not visited before. Several techniques have been recently proposed for the recommendation service. However, no existing work has considered the temporal information for POI recommendations in LBSNs. We believe that time plays an important role in POI recommendations because most users tend to visit different places at different time in a day, visiting a restaurant at noon and visiting a bar at night. In this paper, we define a new problem, namely, the time-aware POI recommendation, to recommend POIs for a given user at a specified time in a day. To solve the problem, we develop a collaborative recommendation model that is able to incorporate temporal information. Moreover, based on the observation that users tend to visit nearby POIs, we further enhance the recommendation model by considering geographical information. Our experimental results on two real-world datasets show that the proposed approach outperforms the state-of-the-art POI recommendation methods substantially.", "The popularity of location-based social networks provide us with a new platform to understand users' preferences based on their location histories. 
In this paper, we present a location-based and preference-aware recommender system that offers a particular user a set of venues (such as restaurants) within a geospatial range with the consideration of both: 1) User preferences, which are automatically learned from her location history and 2) Social opinions, which are mined from the location histories of the local experts. This recommender system can facilitate people's travel not only near their living areas but also to a city that is new to them. As a user can only visit a limited number of locations, the user-locations matrix is very sparse, leading to a big challenge to traditional collaborative filtering-based location recommender systems. The problem becomes even more challenging when people travel to a new city. To this end, we propose a novel location recommender system, which consists of two main parts: offline modeling and online recommendation. The offline modeling part models each individual's personal preferences with a weighted category hierarchy (WCH) and infers the expertise of each user in a city with respect to different category of locations according to their location histories using an iterative learning model. The online recommendation part selects candidate local experts in a geospatial range that matches the user's preferences using a preference-aware candidate selection algorithm and then infers a score of the candidate locations based on the opinions of the selected local experts. Finally, the top-k ranked locations are returned as the recommendations for the user. We evaluated our system with a large-scale real dataset collected from Foursquare. The results confirm that our method offers more effective recommendations than baselines, while having a good efficiency of providing location recommendations.", "In this paper, we aim to provide a point-of-interests (POI) recommendation service for the rapid growing location-based social networks (LBSNs), e.g., Foursquare, Whrrl, etc. Our idea is to explore user preference, social influence and geographical influence for POI recommendations. In addition to deriving user preference based on user-based collaborative filtering and exploring social influence from friends, we put a special emphasis on geographical influence due to the spatial clustering phenomenon exhibited in user check-in activities of LBSNs. We argue that the geographical influence among POIs plays an important role in user check-in behaviors and model it by power law distribution. Accordingly, we develop a collaborative recommendation algorithm based on geographical influence based on naive Bayesian. Furthermore, we propose a unified POI recommendation framework, which fuses user preference to a POI with social influence and geographical influence. Finally, we conduct a comprehensive performance evaluation over two large-scale datasets collected from Foursquare and Whrrl. Experimental results with these real datasets show that the unified collaborative recommendation approach significantly outperforms a wide spectrum of alternative recommendation approaches.", "GPS data tracked on mobile devices contains rich information about human activities and preferences. In this paper, GPS data is used in location-based services (LBSs) to provide collaborative location recommendations. We observe that most existing LBSs provide location recommendations by clustering the User-Location matrix. Since the User-Location matrix created based on GPS data is huge, there are two major problems with these methods. 
First, the number of similar locations that need to be considered in computing the recommendations can be numerous. As a result, the identification of truly relevant locations from numerous candidates is challenging. Second, the clustering process on large matrix is time consuming. Thus, when new GPS data arrives, complete re-clustering of the whole matrix is infeasible. To tackle these two problems, we propose the Collaborative Location Recommendation (CLR) framework for location recommendation. By considering activities (i.e., temporal preferences) and different user classes (i.e., Pattern Users, Normal Users, and Travelers) in the recommendation process, CLR is capable of generating more precise and refined recommendations to the users compared to the existing methods. Moreover, CLR employs a dynamic clustering algorithm CADC to cluster the trajectory data into groups of similar users, similar activities and similar locations efficiently by supporting incremental update of the groups when new GPS trajectory data arrives. We evaluate CLR with a real-world GPS dataset, and confirm that the CLR framework provides more accurate location recommendations compared to the existing methods." ] }
1901.06257
2910992039
Knowledge discovery from GPS trajectory data is an important topic in several scientific areas, including data mining, human behavior analysis, and user modeling. This paper proposes a task that assigns personalized visited-POIs. Its goal is to estimate fine-grained and pre-defined locations (i.e., points of interest (POI)) that are actually visited by users and assign visited-location information to the corresponding span of their (personal) GPS trajectories. We also introduce a novel algorithm to solve this assignment task. First, we exhaustively extract stay-points as candidates for significant locations using a variant of a conventional stay-point extraction method. Then we select significant locations and simultaneously assign visited-POIs to them by considering various aspects, which we formulate in integer linear programming. Experimental results conducted on an actual user dataset show that our method achieves higher accuracy in the visited-POI assignment task than the various cascaded procedures of conventional methods.
Other studies on a location naming task @cite_18 @cite_36 and a POI recommendation task @cite_15 use supervised learning algorithms @cite_10 @cite_44 to build POI ranking models. To formalize their problems as a ranking challenge, they treat a location (i.e., longitude and latitude) as a query and a user's check-in data for it as relevance labels. Their methods utilize user history, the statistics of POIs in a check-in service, and other information to generate features; the ranking model then uses these features to rank POI candidates. The key difference between a visited-POI assignment task and a POI recommendation task is the former's requirement for significant location extraction: previous POI recommendation studies assume that significant locations are given, but our visited-POI assignment task does not. One straightforward approach is therefore to cascade a stay-point extraction algorithm and a POI recommendation method. We regard the nearest neighbor method @cite_22 and the learning-to-rank methods @cite_18 @cite_15 as approaches similar to our proposed method. (We do not consider the method by @cite_36 to be a similar method because it requires the check-in histories of many users to calculate latent topic features; its remaining part is equivalent to the earlier work @cite_18 .)
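The query-style learning-to-rank formulation can be illustrated with a small sketch in which the queried (latitude, longitude) yields a feature vector per candidate POI (distance, the user's own visit count, global popularity) and a linear scorer ranks the candidates. The feature set, the hand-set weights standing in for a trained ranking model, and the example data are all assumptions, not details of the cited systems.

```python
# Sketch of the query-style formulation described above: the (lat, lon) acts as
# a query, each candidate POI gets a simple feature vector, and a scorer ranks
# the candidates. A hand-set linear scorer stands in for a trained
# learning-to-rank model; features, weights, and data are assumptions.
import math

def features(query, poi, user_history, global_counts):
    dlat, dlon = poi["lat"] - query[0], poi["lon"] - query[1]
    dist_m = 111320 * math.hypot(dlat, dlon * math.cos(math.radians(query[0])))
    return [
        -math.log1p(dist_m),                           # nearer is better
        math.log1p(user_history.get(poi["name"], 0)),  # personal visit count
        math.log1p(global_counts.get(poi["name"], 0)), # overall popularity
    ]

def rank_pois(query, candidates, user_history, global_counts,
              weights=(1.0, 2.0, 0.5)):
    scored = []
    for poi in candidates:
        f = features(query, poi, user_history, global_counts)
        scored.append((sum(w * x for w, x in zip(weights, f)), poi["name"]))
    return [name for _, name in sorted(scored, reverse=True)]

candidates = [{"name": "Cafe A", "lat": 35.6596, "lon": 139.7004},
              {"name": "Office B", "lat": 35.6598, "lon": 139.7006}]
print(rank_pois((35.6597, 139.7005), candidates,
                user_history={"Office B": 40}, global_counts={"Cafe A": 500}))
# The personal-history feature lifts "Office B" above the globally popular cafe,
# illustrating why personalization matters in this formulation.
```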
{ "cite_N": [ "@cite_18", "@cite_22", "@cite_36", "@cite_44", "@cite_15", "@cite_10" ], "mid": [ "2082576493", "2067193733", "2029550988", "2149427297", "2026532078", "2108862644" ], "abstract": [ "Many innovative location-based services have been established in order to facilitate users' everyday lives. Usually, these services cannot obtain location names automatically from users' GPS coordinates to claim their current locations. In this paper, we propose a novel location naming approach, which can provide concrete and meaningful location names to users based on their current location, time and check-in histories. In particular, when users input a GPS point, they will receive a ranked list of Points of Interest which shows the most possible semantic names for that location. In our approach, we draw an analogy between the location naming problem and the location-based search problem. We proposed a local search framework to integrate different kinds of popularity factors and personal preferences. After identifying important features by feature selection, we apply learning-to-rank technique to weight them and build our system based on 31811 check-in records from 545 users. By evaluating on this dataset, our approach is shown to be effective in automatically naming users' locations. 64.5 of test queries can return the intended location names within the top 5 results.", "With the increasing deployment and use of GPS-enabled devices, massive amounts of GPS data are becoming available. We propose a general framework for the mining of semantically meaningful, significant locations, e.g., shopping malls and restaurants, from such data. We present techniques capable of extracting semantic locations from GPS data. We capture the relationships between locations and between locations and users with a graph. Significance is then assigned to locations using random walks over the graph that propagates significance among the locations. In doing so, mutual reinforcement between location significance and user authority is exploited for determining significance, as are aspects such as the number of visits to a location, the durations of the visits, and the distances users travel to reach locations. Studies using up to 100 million GPS records from a confined spatio-temporal region demonstrate that the proposal is effective and is capable of outperforming baseline methods and an extension of an existing proposal.", "Many innovative location-based services have been established to offer users greater convenience in their everyday lives. These services usually cannot map user's physical locations into semantic names automatically. The semantic names of locations provide important context for mobile recommendations and advertisements. In this article, we proposed a novel location naming approach which can automatically provide semantic names for users given their locations and time. In particular, when a user opens a GPS device and submits a query with her physical location and time, she will be returned the most appropriate semantic name. In our approach, we drew an analogy between location naming and local search, and designed a local search framework to propose a spatiotemporal and user preference (STUP) model for location naming. STUP combined three components, user preference (UP), spatial preference (SP), and temporal preference (TP), by leveraging learning-to-rank techniques. We evaluated STUP on 466,190 check-ins of 5,805 users from Shanghai and 135,052 check-ins of 1,361 users from Beijing. 
The results showed that SP was most effective among three components and that UP can provide personalized semantic names, and thus it was a necessity for location naming. Although TP was not as discriminative as the others, it can still be beneficial when integrated with SP and UP. Finally, according to the experimental results, STUP outperformed the proposed baselines and returned accurate semantic names for 23.6p and 26.6p of the testing queries from Beijing and Shanghai, respectively.", "Learning to rank for Information Retrieval (IR) is a task to automatically construct a ranking model using training data, such that the model can sort new objects according to their degrees of relevance, preference, or importance. Many IR problems are by nature ranking problems, and many IR technologies can be potentially enhanced by using learning-to-rank techniques. The objective of this tutorial is to give an introduction to this research direction. Specifically, the existing learning-to-rank algorithms are reviewed and categorized into three approaches: the pointwise, pairwise, and listwise approaches. The advantages and disadvantages with each approach are analyzed, and the relationships between the loss functions used in these approaches and IR evaluation measures are discussed. Then the empirical evaluations on typical learning-to-rank methods are shown, with the LETOR collection as a benchmark dataset, which seems to suggest that the listwise approach be the most effective one among all the approaches. After that, a statistical ranking theory is introduced, which can describe different learning-to-rank algorithms, and be used to analyze their query-level generalization abilities. At the end of the tutorial, we provide a summary and discuss potential future work on learning to rank.", "In this article we consider the problem of mapping a noisy estimate of a user's current location to a semantically meaningful point of interest, such as a home, restaurant, or store. Despite the poor accuracy of GPS on current mobile devices and the relatively high density of places in urban areas, it is possible to predict a user's location with considerable precision by explicitly modeling both places and users and by combining a variety of signals about a user's current context. Places are often simply modeled as a single latitude and longitude when in fact they are complex entities existing in both space and time and shaped by the millions of people that interact with them. Similarly, models of users reveal complex but predictable patterns of mobility that can be exploited for this task. We propose a novel spatial search algorithm that infers a user's location by combining aggregate signals mined from billions of foursquare check-ins with real-time contextual information. We evaluate a variety of techniques and demonstrate that machine learning algorithms for ranking and spatiotemporal models of places and users offer significant improvement over common methods for location search based on distance and popularity.", "The paper is concerned with learning to rank, which is to construct a model or a function for ranking objects. Learning to rank is useful for document retrieval, collaborative filtering, and many other applications. Several methods for learning to rank have been proposed, which take object pairs as 'instances' in learning. We refer to them as the pairwise approach in this paper. 
Although the pairwise approach offers advantages, it ignores the fact that ranking is a prediction task on list of objects. The paper postulates that learning to rank should adopt the listwise approach in which lists of objects are used as 'instances' in learning. The paper proposes a new probabilistic method for the approach. Specifically it introduces two probability models, respectively referred to as permutation probability and top k probability, to define a listwise loss function for learning. Neural Network and Gradient Descent are then employed as model and algorithm in the learning method. Experimental results on information retrieval show that the proposed listwise approach performs better than the pairwise approach." ] }
1901.06257
2910992039
Knowledge discovery from GPS trajectory data is an important topic in several scientific areas, including data mining, human behavior analysis, and user modeling. This paper proposes a task that assigns personalized visited-POIs. Its goal is to estimate fine-grained and pre-defined locations (i.e., points of interest (POI)) that are actually visited by users and assign visited-location information to the corresponding span of their (personal) GPS trajectories. We also introduce a novel algorithm to solve this assignment task. First, we exhaustively extract stay-points as candidates for significant locations using a variant of a conventional stay-point extraction method. Then we select significant locations and simultaneously assign visited-POIs to them by considering various aspects, which we formulate in integer linear programming. Experimental results conducted on an actual user dataset show that our method achieves higher accuracy in the visited-POI assignment task than the various cascaded procedures of conventional methods.
More recently, several novel tasks have been proposed that are related to visited-POI assignment. For example, @cite_34 characterized the life cycle of POIs and investigated the POI evolution process over time, and @cite_6 tackled a task that clusters users along spatio-temporal dimensions with a non-negative tensor factorization method. Both clearly focus on targets different from ours.
{ "cite_N": [ "@cite_34", "@cite_6" ], "mid": [ "2507653743", "2338768639" ], "abstract": [ "A Point of Interest (POI) refers to a specific location that people may find useful or interesting. While a large body of research has been focused on identifying and recommending POIs, there are few studies on characterizing the life cycle of POIs. Indeed, a comprehensive understanding of POI life cycle can be helpful for various tasks, such as urban planning, business site selection, and real estate evaluation. In this paper, we develop a framework, named POLIP, for characterizing the POI life cycle with multiple data sources. Specifically, to investigate the POI evolution process over time, we first formulate a serial classification problem to predict the life status of POIs. The prediction approach is designed to integrate two important perspectives: 1) the spatial-temporal dependencies associated with the prosperity of POIs, and 2) the human mobility dynamics hidden in the citywide taxicab data related to the POIs at multiple granularity levels. In addition, based on the predicted life statuses in successive time windows for a given POI, we design an algorithm to characterize its life cycle. Finally, we performed extensive experiments using large-scale and real-world datasets. The results demonstrate the feasibility in automatic characterizing POI life cycle and shed important light on future research directions.", "Nowadays, human movement in urban spaces can be traced digitally in many cases. It can be observed that movement patterns are not constant, but vary across time and space. In this work, we characterize such spatio-temporal patterns with an innovative combination of two separate approaches that have been utilized for studying human mobility in the past. First, by using non-negative tensor factorization (NTF), we are able to cluster human behavior based on spatio-temporal dimensions. Second, for characterizing these clusters, we propose to use HypTrails, a Bayesian approach for expressing and comparing hypotheses about human trails. To formalize hypotheses, we utilize publicly available Web data (i.e., Foursquare and census data). By studying taxi data in Manhattan, we can discover and characterize human mobility patterns that cannot be identified in a collective analysis. As one example, we find a group of taxi rides that end at locations with a high number of party venues on weekend nights. Our findings argue for a more fine-grained analysis of human mobility in order to make informed decisions for e.g., enhancing urban structures, tailored traffic control and location-based recommender systems." ] }
1901.06257
2910992039
Knowledge discovery from GPS trajectory data is an important topic in several scientific areas, including data mining, human behavior analysis, and user modeling. This paper proposes a task that assigns personalized visited-POIs. Its goal is to estimate fine-grained and pre-defined locations (i.e., points of interest (POI)) that are actually visited by users and assign visited-location information to the corresponding span of their (personal) GPS trajectories. We also introduce a novel algorithm to solve this assignment task. First, we exhaustively extract stay-points as candidates for significant locations using a variant of a conventional stay-point extraction method. Then we select significant locations and simultaneously assign visited-POIs to them by considering various aspects, which we formulate in integer linear programming. Experimental results conducted on an actual user dataset show that our method achieves higher accuracy in the visited-POI assignment task than the various cascaded procedures of conventional methods.
@cite_41 focused on the periodic behaviors of users and formalized POI check-in patterns as a stochastic point process. An interesting aspect of their method is that it takes into account the influence of a user's close friends. In contrast, our task detects actual visited-POIs from raw GPS trajectories and POI information, which covers users' periodic behaviors without being limited to them. Therefore, our task indirectly subsumes theirs, even though it does not specifically focus on periodic behaviors.
{ "cite_N": [ "@cite_41" ], "mid": [ "2549560474" ], "abstract": [ "Social networks are getting closer to our real physical world. People share the exact location and time of their check-ins and are influenced by their friends. Modeling the spatio-temporal behavior of users in social networks is of great importance for predicting the future behavior of users, controlling the users' movements, and finding the latent influence network. It is observed that users have periodic patterns in their movements. Also, they are influenced by the locations that their close friends recently visited. Leveraging these two observations, we propose a probabilistic model based on a doubly stochastic point process with a periodic decaying kernel for the time of check-ins and a time-varying multinomial distribution for the location of check-ins of users in the location-based social networks. We learn the model parameters using an efficient EM algorithm, which distributes over the users. Experiments on synthetic and real data gathered from Foursquare show that the proposed inference algorithm learns the parameters efficiently and our model outperforms the other alternatives in the prediction of time and location of check-ins." ] }
1901.06257
2910992039
Knowledge discovery from GPS trajectory data is an important topic in several scientific areas, including data mining, human behavior analysis, and user modeling. This paper proposes a task that assigns personalized visited-POIs. Its goal is to estimate fine-grained and pre-defined locations (i.e., points of interest (POI)) that are actually visited by users and assign visited-location information to the corresponding span of their (personal) GPS trajectories. We also introduce a novel algorithm to solve this assignment task. First, we exhaustively extract stay-points as candidates for significant locations using a variant of a conventional stay-point extraction method. Then we select significant locations and simultaneously assign visited-POIs to them by considering various aspects, which we formulate in integer linear programming. Experimental results conducted on an actual user dataset show that our method achieves higher accuracy in the visited-POI assignment task than the various cascaded procedures of conventional methods.
@cite_30 proposed a sequential personalized spatial item recommendation framework (SPORE), which recommends a sequence of POIs based on individual POI-visit histories. Their target closely resembles ours. The essential difference, however, is that their task assumes a sequence of check-in records as input, rather than the raw GPS trajectories in our case. This means that their method does not assume that an input sequence (check-in records) contains any false positive information, which is one of the main challenges of our task. In addition, their proposed algorithm, SPORE, cannot be directly applied to GPS trajectories since it has no mechanism for removing false positive stay-points, whereas our method can remove such meaningless stay-points.
{ "cite_N": [ "@cite_30" ], "mid": [ "2434565296" ], "abstract": [ "With the rapid development of location-based social networks (LBSNs), spatial item recommendation has become an important way of helping users discover interesting locations to increase their engagement with location-based services. Although human movement exhibits sequential patterns in LBSNs, most current studies on spatial item recommendations do not consider the sequential influence of locations. Leveraging sequential patterns in spatial item recommendation is, however, very challenging, considering 1) users' check-in data in LBSNs has a low sampling rate in both space and time, which renders existing prediction techniques on GPS trajectories ineffective; 2) the prediction space is extremely large, with millions of distinct locations as the next prediction target, which impedes the application of classical Markov chain models; and 3) there is no existing framework that unifies users' personal interests and the sequential influence in a principled manner. In light of the above challenges, we propose a sequential personalized spatial item recommendation framework (SPORE) which introduces a novel latent variable topic-region to model and fuse sequential influence with personal interests in the latent and exponential space. The advantages of modeling the sequential effect at the topic-region level include a significantly reduced prediction space, an effective alleviation of data sparsity and a direct expression of the semantic meaning of users' spatial activities. Furthermore, we design an asymmetric Locality Sensitive Hashing (ALSH) technique to speed up the online top-k recommendation process by extending the traditional LSH. We evaluate the performance of SPORE on two real datasets and one large-scale synthetic dataset. The results demonstrate a significant improvement in SPORE's ability to recommend spatial items, in terms of both effectiveness and efficiency, compared with the state-of-the-art methods." ] }
1901.06257
2910992039
Knowledge discovery from GPS trajectory data is an important topic in several scientific areas, including data mining, human behavior analysis, and user modeling. This paper proposes a task that assigns personalized visited-POIs. Its goal is to estimate fine-grained and pre-defined locations (i.e., points of interest (POI)) that are actually visited by users and assign visited-location information to the corresponding span of their (personal) GPS trajectories. We also introduce a novel algorithm to solve this assignment task. First, we exhaustively extract stay-points as candidates for significant locations using a variant of a conventional stay-point extraction method. Then we select significant locations and simultaneously assign visited-POIs to them by considering various aspects, which we formulate in integer linear programming. Experimental results conducted on an actual user dataset show that our method achieves higher accuracy in the visited-POI assignment task than the various cascaded procedures of conventional methods.
@cite_32 proposed a task that detects personally semantic places from GPS trajectories, which also appears to closely resemble ours. However, their target is to detect places (i.e., those frequently visited by an individual user) that might carry such important semantic meanings as home or office. From this perspective, their target is closely related to that of @cite_41 , as explained above. In contrast, our proposed task detects not only frequently visited places such as homes and offices but also every POI that the user actually visits, regardless of the frequency.
{ "cite_N": [ "@cite_41", "@cite_32" ], "mid": [ "2549560474", "2039001267" ], "abstract": [ "Social networks are getting closer to our real physical world. People share the exact location and time of their check-ins and are influenced by their friends. Modeling the spatio-temporal behavior of users in social networks is of great importance for predicting the future behavior of users, controlling the users' movements, and finding the latent influence network. It is observed that users have periodic patterns in their movements. Also, they are influenced by the locations that their close friends recently visited. Leveraging these two observations, we propose a probabilistic model based on a doubly stochastic point process with a periodic decaying kernel for the time of check-ins and a time-varying multinomial distribution for the location of check-ins of users in the location-based social networks. We learn the model parameters using an efficient EM algorithm, which distributes over the users. Experiments on synthetic and real data gathered from Foursquare show that the proposed inference algorithm learns the parameters efficiently and our model outperforms the other alternatives in the prediction of time and location of check-ins.", "A place is a locale that is frequently visited by an individual user and carries important semantic meanings (e.g. home, work, etc.). Many location-aware applications will be greatly enhanced with the ability of the automatic discovery of personally semantic places. The discovery of a user's personally semantic places involves obtaining the physical locations and semantic meanings of these places. In this paper, we propose approaches to address both of the problems. For the physical place extraction problem, a hierarchical clustering algorithm is proposed to firstly extract visit points from the GPS trajectories, and then these visit points can be clustered to form physical places. For the semantic place recognition problem, Bayesian networks (encoding the temporal patterns in which the places are visited) are used in combination with a customized POI (i.e. place of interest) database (containing the spatial features of the places) to categorize the extracted physical places into pre-defined types. An extensive set of experiments have been conducted to demonstrate the effectiveness of the proposed approaches based on a dataset of real-world GPS trajectories." ] }
1901.06257
2910992039
Knowledge discovery from GPS trajectory data is an important topic in several scientific areas, including data mining, human behavior analysis, and user modeling. This paper proposes a task that assigns personalized visited-POIs. Its goal is to estimate fine-grained and pre-defined locations (i.e., points of interest (POI)) that are actually visited by users and assign visited-location information to the corresponding span of their (personal) GPS trajectories. We also introduce a novel algorithm to solve this assignment task. First, we exhaustively extract stay-points as candidates for significant locations using a variant of a conventional stay-point extraction method. Then we select significant locations and simultaneously assign visited-POIs to them by considering various aspects, which we formulate in integer linear programming. Experimental results conducted on an actual user dataset show that our method achieves higher accuracy in the visited-POI assignment task than the various cascaded procedures of conventional methods.
@cite_40 employed a Bayesian network to detect the categories of visited-POIs, such as hospitals and universities, from the GPS trajectories of vehicles. Their motivation is closely related to ours. The essential difference is that they only detect the categories of visited-POIs; we detect the visited-POIs themselves. Additionally, they used vehicles' GPS trajectories, whereas we target the trajectories obtained from the mobile devices of users. Thus, our challenge is much more complicated.
{ "cite_N": [ "@cite_40" ], "mid": [ "2614175038" ], "abstract": [ "Identifying visited points of interest (PoIs) from vehicle trajectories remains an open problem that is difficult due to vehicles parking often at some distance from the visited PoI and due to some regions having a high PoI density. We propose a visited PoI extraction (VPE) method that identifies visited PoIs using a Bayesian network. The method considers stay duration, weekday, arrival time, and PoI category to compute the probability that a PoI is visited. We also provide a method to generate labeled data from unlabeled GPS trajectories. An experimental evaluation shows that VPE achieves a precision@3 value of 0.8, indicating that VPE is able to model the relationship between the temporal features of a stop and the category of the visited PoI." ] }
1907.08015
2956604637
The evolution and development of events have their own basic principles, which make events happen sequentially. Therefore, the discovery of such evolutionary patterns among events are of great value for event prediction, decision-making and scenario design of dialog systems. However, conventional knowledge graph mainly focuses on the entities and their relations, which neglects the real world events. In this paper, we present a novel type of knowledge base - Event Logic Graph (ELG), which can reveal evolutionary patterns and development logics of real world events. Specifically, ELG is a directed cyclic graph, whose nodes are events, and edges stand for the sequential, causal or hypernym-hyponym (is-a) relations between events. We constructed two domain ELG: financial domain ELG, which consists of more than 1.5 million of event nodes and more than 1.8 million of directed edges, and travel domain ELG, which consists of about 30 thousand of event nodes and more than 234 thousand of directed edges. Experimental results show that ELG is effective for the task of script event prediction.
The research area most relevant to ELG is script learning. The use of scripts in AI dates back to the 1970s @cite_4 @cite_17 . In these early studies, scripts are an influential encoding of situation-specific world events. In recent years, a growing body of research has investigated statistical script learning. One line of work proposed the unsupervised induction of narrative event chains from raw newswire text, with the narrative cloze task as the evaluation metric, and later work used a bigram model to explicitly model the temporal order of event pairs. However, these approaches all utilized a very simple event representation, consisting of a verb and a single dependent argument. To overcome this drawback, Pichotta and Mooney @cite_14 presented an approach that employs events with multiple arguments.
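To make the bigram-style script models mentioned above concrete, here is a minimal Python sketch that counts ordered event pairs and ranks candidate next events, in the spirit of a narrative-cloze-style prediction. The string event representation, the toy training sequence, and the add-one smoothing over the candidate set are illustrative assumptions, not the cited systems.

```python
# Minimal sketch (not the cited systems): a bigram model over event sequences.
# Events are simplified to "verb:dependency" strings; add-one smoothing over
# the candidate set is an arbitrary choice made for illustration.
from collections import Counter, defaultdict

def train_bigram(event_sequences):
    """Count ordered event pairs (prev -> next) over all training sequences."""
    pair_counts = defaultdict(Counter)
    for seq in event_sequences:
        for prev, nxt in zip(seq, seq[1:]):
            pair_counts[prev][nxt] += 1
    return pair_counts

def rank_next_events(pair_counts, prev_event, candidates):
    """Rank candidate next events by smoothed bigram probability P(next | prev)."""
    counts = pair_counts[prev_event]
    total = sum(counts.values()) + len(candidates)
    scored = [(c, (counts[c] + 1) / total) for c in candidates]
    return sorted(scored, key=lambda x: x[1], reverse=True)

# Usage: train on event chains extracted from documents, then score held-out events.
model = train_bigram([["enter:subj", "order:subj", "eat:subj", "pay:subj"]])
print(rank_next_events(model, "eat:subj", ["pay:subj", "enter:subj"]))
```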
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_17" ], "mid": [ "2145374219", "2000900121", "2142269169" ], "abstract": [ "Scripts represent knowledge of stereotypical event sequences that can aid text understanding. Initial statistical methods have been developed to learn probabilistic scripts from raw text corpora; however, they utilize a very impoverished representation of events, consisting of a verb and one dependent argument. We present a script learning approach that employs events with multiple arguments. Unlike previous work, we model the interactions between multiple entities in a script. Experiments on a large corpus using the task of inferring held-out events (the “narrative cloze evaluation”) demonstrate that modeling multi-argument events improves predictive accuracy.", "For both people and machines, each in their own way, there is a serious problem in common of making sense out of what they hear, see, or are told about the world. The conceptual apparatus necessary to perform even a partial feat of understanding is formidable and fascinating. Our analysis of this apparatus is what this book is about. —Roger C. Schank and Robert P. Abelson from the Introduction (http: www.psypress.com scripts-plans-goals-and-understanding-9780898591385)", "This paper describes a natural language system which improves its own performance through learning. The system processes short English narratives and is able to acquire, from a single narrative, a new schema for a stereotypical set of actions. During the understanding process, the system attempts to construct explanations for characters' actions in terms of the goals their actions were meant to achieve. When the system observes that a character has achieved an interesting goal in a novel way, it generalizes the set of actions they used to achieve this goal into a new schema. The generalization process is a knowledge-based analysis of the causal structure of the narrative which removes unnecessary details while maintaining the validity of the causal explanation. The resulting generalized set of actions is then stored as a new schema and used by the system to correctly process narratives which were previously beyond its capabilities." ] }
1907.07826
2959681520
Detecting emotions from text is an extension of simple sentiment polarity detection. Instead of considering only positive or negative sentiments, emotions are conveyed using more tangible manner; thus, they can be expressed as many shades of gray. This paper manifests the results of our experimentation for fine-grained emotion analysis on Bangla text. We gathered and annotated a text corpus consisting of user comments from several Facebook groups regarding socio-economic and political issues, and we made efforts to extract the basic emotions (sadness, happiness, disgust, surprise, fear, anger) conveyed through these comments. Finally, we compared the results of the five most popular classical machine learning techniques namely Naive Bayes, Decision Tree, k-Nearest Neighbor (k-NN), Support Vector Machine (SVM) and K-Means Clustering with several combinations of features. Our best model (SVM with a non-linear radial-basis function (RBF) kernel) achieved an overall average accuracy score of 52.98 and an F1 score (macro) of 0.3324
In a different paper @cite_0 , the authors described the preparation of the Bengali WordNet Affect containing six types of emotion words. They employed an automatic method of sense disambiguation. The Bengali WordNet Affect could be useful for emotion-related language processing tasks in Bengali.
{ "cite_N": [ "@cite_0" ], "mid": [ "2065586386" ], "abstract": [ "The present discussion highlights the aspects of an ongoing doctoral thesis grounded on the analysis and tracking of emotions from English and Bengali texts. Development of lexical resources and corpora meets the preliminary urgencies. The research spectrum aims to identify the evaluative emotional expressions at word, phrase, sentence, and document level granularities along with their associated holders and topics. Tracking of emotions based on topic or event was carried out by employing sense based affect scoring techniques. The labeled emotion corpora are being prepared from unlabeled examples to cope with the scarcity of emotional resources, especially for the resource constraint language like Bengali. Different unsupervised, supervised and semi-supervised strategies, adopted for coloring each outline of the research spectrum produce satisfactory outcomes" ] }
1907.07826
2959681520
Detecting emotions from text is an extension of simple sentiment polarity detection. Instead of considering only positive or negative sentiments, emotions are conveyed using more tangible manner; thus, they can be expressed as many shades of gray. This paper manifests the results of our experimentation for fine-grained emotion analysis on Bangla text. We gathered and annotated a text corpus consisting of user comments from several Facebook groups regarding socio-economic and political issues, and we made efforts to extract the basic emotions (sadness, happiness, disgust, surprise, fear, anger) conveyed through these comments. Finally, we compared the results of the five most popular classical machine learning techniques namely Naive Bayes, Decision Tree, k-Nearest Neighbor (k-NN), Support Vector Machine (SVM) and K-Means Clustering with several combinations of features. Our best model (SVM with a non-linear radial-basis function (RBF) kernel) achieved an overall average accuracy score of 52.98 and an F1 score (macro) of 0.3324
In a case study for Bengali @cite_4 , the authors considered 1,100 sentences on eight different topics. They prepared a knowledge base for emoticons and also employed a morphological analyzer to identify lexical keywords from the Bengali WordNet Affect lists. They claimed an overall precision, recall and F1-score (micro) of 59.36%, 64.98% and 62.17%, respectively.
{ "cite_N": [ "@cite_4" ], "mid": [ "193025648" ], "abstract": [ "Rapid growth of blogs in the Web 2.0 and the handshaking between multilingual search and sentiment analysis motivate us to develop a blog based emotion analysis system for Bengali. The present paper describes the identification, visualization and tracking of bloggers' emotions with respect to time from Bengali blog documents. A simple pre-processing technique has been employed to retrieve and store the bloggers' comments on specific topics. The assignment of Ekman's six basic emotions to the bloggers' comments is carried out at word, sentence and paragraph level granularities using the Bengali WordNet AffectLists. The evaluation produces the precision, recall and F-Score of 59.36 , 64.98 and 62.17 respectively for 1100 emotional comments retrieved from 20 blog documents. Each of the bloggers' emotions with respect to different timestamps is visualized by an emotion graph. The emotion graphs of 20 bloggers demonstrate that the system performs satisfactorily in case of emotion tracking." ] }
1907.07885
2960548256
We introduce a formal framework for analyzing trades in financial markets. An exchange is where multiple buyers and sellers participate to trade. These days, all big exchanges use computer algorithms that implement double sided auctions to match buy and sell requests and these algorithms must abide by certain regulatory guidelines. For example, market regulators enforce that a matching produced by exchanges should be , and . To verify these properties of trades, we first formally define these notions in a theorem prover and then give formal proofs of relevant results on matchings. Finally, we use this framework to verify properties of two important classes of double sided auctions. All the definitions and results presented in this paper are completely formalised in the Coq proof assistant without adding any additional axioms to it.
To the best of our knowledge, there is no prior work that formalizes the financial algorithms used by exchanges. Passmore and Ignatovich @cite_17 highlight the significance, opportunities and challenges involved in formalizing financial markets. Their work describes in detail the whole spectrum of financial algorithms that need to be verified to ensure safe and fair markets. The matching algorithms used by exchanges are at the core of this spectrum.
{ "cite_N": [ "@cite_17" ], "mid": [ "2735447866" ], "abstract": [ "Many deep issues plaguing today’s financial markets are symptoms of a fundamental problem: The complexity of algorithms underlying modern finance has significantly outpaced the power of traditional tools used to design and regulate them. At Aesthetic Integration, we have pioneered the use of formal verification for analysing the safety and fairness of financial algorithms. With a focus on financial infrastructure (e.g., the matching logics of exchanges and dark pools and FIX connectivity between trading systems), we describe the landscape, and illustrate our Imandra formal verification system on a number of real-world examples. We sketch many open problems and future directions along the way." ] }
1907.07885
2960548256
We introduce a formal framework for analyzing trades in financial markets. An exchange is where multiple buyers and sellers participate to trade. These days, all big exchanges use computer algorithms that implement double sided auctions to match buy and sell requests and these algorithms must abide by certain regulatory guidelines. For example, market regulators enforce that a matching produced by exchanges should be , and . To verify these properties of trades, we first formally define these notions in a theorem prover and then give formal proofs of relevant results on matchings. Finally, we use this framework to verify properties of two important classes of double sided auctions. All the definitions and results presented in this paper are completely formalised in the Coq proof assistant without adding any additional axioms to it.
On the other hand, there are quite a few works formalizing various concepts from auction theory @cite_15 @cite_6 @cite_7 . Most of these works focus on the (combinatorial) Vickrey auction mechanism. In such an auction, there is a single seller with multiple distinct items and several buyers, each having a valuation for every subset of the items. Each buyer places bids on every combination of the items, and at the end of bidding the seller's aim is to maximise the total value of the items by suitably assigning them to the buyers.
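For illustration, the following brute-force Python sketch solves the allocation problem described above: it assigns items to buyers so as to maximise the sum of the buyers' reported bundle valuations. The data layout, the made-up valuations, and the exhaustive enumeration are assumptions made for clarity; the sketch covers only winner determination, not Vickrey pricing or the machine-checked formalizations discussed in the cited works.

```python
# Illustrative brute force (not the formalized algorithms in the cited works):
# winner determination for a combinatorial auction.  Each item is assigned to
# at most one buyer, and the seller maximises the sum of the buyers' reported
# valuations of the bundles they receive.  Valuations below are made up.
from itertools import product

def best_allocation(items, valuations):
    """valuations: {buyer: {frozenset_of_items: value}}; missing bundles count as 0."""
    buyers = list(valuations)
    best_value, best_assign = -1, None
    # Each item goes to exactly one buyer or stays unsold (None).
    for assign in product(buyers + [None], repeat=len(items)):
        bundles = {b: frozenset(i for i, owner in zip(items, assign) if owner == b)
                   for b in buyers}
        value = sum(valuations[b].get(bundles[b], 0) for b in buyers)
        if value > best_value:
            best_value, best_assign = value, dict(zip(items, assign))
    return best_value, best_assign

vals = {"alice": {frozenset({"a", "b"}): 10, frozenset({"a"}): 4},
        "bob":   {frozenset({"b"}): 7,       frozenset({"a"}): 5}}
print(best_allocation(["a", "b"], vals))   # -> (11, {'a': 'alice', 'b': 'bob'})
```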
{ "cite_N": [ "@cite_15", "@cite_7", "@cite_6" ], "mid": [ "2059840483", "1796295358", "1521892088" ], "abstract": [ "We introduce formal methods' of mechanized reasoning from computer science to address two problems in auction design and practice: is a given auction design soundly specified, possessing its intended properties; and, is the design faithfully implemented when actually run? Failure on either front can be hugely costly in large auctions. In the familiar setting of the combinatorial Vickrey auction, we use a mechanized reasoner, Isabelle, to first ensure that the auction has a set of desired properties (e.g. allocating all items at non-negative prices), and to then generate verified executable code directly from the specified design. Having established the expected results in a known context, we intend next to use formal methods to verify new auction designs.", "We are interested in finding algorithms which will allow an agent roaming between different electronic auction institutions to automatically verify the game-theoretic properties of a previously unseen auction protocol. A property may be that the protocol is robust to collusion or deception or that a given strategy is optimal. Model checking provides an automatic way of carrying out such proofs. However it may suffer from state space explosion for large models. To improve the performance of model checking, abstractions were used along with the Spin model checker. We considered two case studies: the Vickrey auction and a tractable combinatorial auction. Numerical results showed the limits of relying solely on Spin . To reduce the state space required by Spin , two property-preserving abstraction methods were applied: the first is the classical program slicing technique, which removes irrelevant variables with respect to the property; the second replaces large data, possibly infinite values of variables with smaller abstract values. This enabled us to model check the strategy-proofness property of the Vickrey auction for unbounded bid range and number of agents.", "Novel auction schemes are constantly being designed. Their design has significant consequences for the allocation of goods and the revenues generated. But how to tell whether a new design has the desired properties, such as efficiency, i.e. allocating goods to those bidders who value them most? We say: by formal, machine-checked proofs. We investigated the suitability of the Isabelle, Theorema, Mizar, and Hets CASL TPTP theorem provers for reproducing a key result of auction theory: Vickrey's 1961 theorem on the properties of second-price auctions. Based on our formalisation experience, taking an auction designer's perspective, we give recommendations on what system to use for formalising auctions, and outline further steps towards a complete auction theory toolbox." ] }
1907.07729
2961115932
We consider the problem of rigid registration, where we wish to jointly register multiple point sets via rigid transforms. This arises in applications such as sensor network localization, multiview registration, and protein structure determination. The least-squares estimator for this problem can be reduced to a rank-constrained semidefinite program (REG-SDP). It was recently shown that by formally applying the alternating direction method of multipliers (ADMM), we can derive an iterative solver (REG-ADMM) for REG-SDP, wherein each subproblem admits a simple closed-form solution. The empirical success of REG-ADMM has been demonstrated for multiview registration. However, its convergence does not follow from the existing literature on nonconvex ADMM. In this work, we study the convergence of REG-ADMM and our main findings are as follows. We prove that any fixed point of REG-ADMM is a stationary (KKT) point of REG-SDP. Moreover, for clean measurements, we give an explicit formula for the ADMM parameter @math , for which REG-ADMM is guaranteed to converge to the global optimum (with arbitrary initialization). If the noise is low, we can still show that the iterates converge to the global optimum, provided they are initialized sufficiently close to the optimum. On the other hand, if the noise is high, we explain why REG-ADMM becomes unstable if @math is less than some threshold, irrespective of the initialization. We present simulation results to support our theoretical predictions. The novelty of our analysis lies in the fact that we exploit the notion of tightness of convex relaxation to arrive at our convergence results.
The rank-restricted subset @math of the PSD cone is nonconvex, which implies that the standard convergence result for ADMM @cite_27 does not directly apply to REG-ADMM. However, we do leverage the convergence of convex ADMM for analyzing the convergence of REG-ADMM when the noise is low. A phase transition phenomenon similar to the one cited above has been analyzed in @cite_6 , albeit in the context of phase synchronization. Our proof of the existence of the phase transition for REG-ADMM is in the spirit of this analysis.
{ "cite_N": [ "@cite_27", "@cite_6" ], "mid": [ "2164278908", "2161367205" ], "abstract": [ "Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for l1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.", "Maximum likelihood estimation problems are, in general, intractable optimization problems. As a result, it is common to approximate the maximum likelihood estimator (MLE) using convex relaxations. In some cases, the relaxation is tight: it recovers the true MLE. Most tightness proofs only apply to situations where the MLE exactly recovers a planted solution (known to the analyst). It is then sufficient to establish that the optimality conditions hold at the planted signal. In this paper, we study an estimation problem (angular synchronization) for which the MLE is not a simple function of the planted solution, yet for which the convex relaxation is tight. To establish tightness in this context, the proof is less direct because the point at which to verify optimality conditions is not known explicitly. Angular synchronization consists in estimating a collection of n phases, given noisy measurements of the pairwise relative phases. The MLE for angular synchronization is the solution of a (hard) non-bipartite Grothendieck problem over the complex numbers. We consider a stochastic model for the data: a planted signal (that is, a ground truth set of phases) is corrupted with non-adversarial random noise. Even though the MLE does not coincide with the planted signal, we show that the classical semidefinite relaxation for it is tight, with high probability. This holds even for high levels of noise." ] }
1907.07729
2961115932
We consider the problem of rigid registration, where we wish to jointly register multiple point sets via rigid transforms. This arises in applications such as sensor network localization, multiview registration, and protein structure determination. The least-squares estimator for this problem can be reduced to a rank-constrained semidefinite program (REG-SDP). It was recently shown that by formally applying the alternating direction method of multipliers (ADMM), we can derive an iterative solver (REG-ADMM) for REG-SDP, wherein each subproblem admits a simple closed-form solution. The empirical success of REG-ADMM has been demonstrated for multiview registration. However, its convergence does not follow from the existing literature on nonconvex ADMM. In this work, we study the convergence of REG-ADMM and our main findings are as follows. We prove that any fixed point of REG-ADMM is a stationary (KKT) point of REG-SDP. Moreover, for clean measurements, we give an explicit formula for the ADMM parameter @math , for which REG-ADMM is guaranteed to converge to the global optimum (with arbitrary initialization). If the noise is low, we can still show that the iterates converge to the global optimum, provided they are initialized sufficiently close to the optimum. On the other hand, if the noise is high, we explain why REG-ADMM becomes unstable if @math is less than some threshold, irrespective of the initialization. We present simulation results to support our theoretical predictions. The novelty of our analysis lies in the fact that we exploit the notion of tightness of convex relaxation to arrive at our convergence results.
The theoretical convergence of ADMM for nonconvex problems has been studied in @cite_17 @cite_0 @cite_7 . However, a crucial working assumption common to these results does not hold in our case. More precisely, observe that the objective can be rewritten in terms of indicator functions, where @math is the indicator function associated with a feasible set @math @cite_27 , namely, @math if @math , and @math otherwise. Notice that, because of the indicator functions, the resulting objective function is non-differentiable in @math and @math . This violates a regularity assumption common in existing analyses of nonconvex ADMM, namely, that the objective must be smooth in one of the variables. In these works, convergence results are obtained by proving a monotonic decrease of the augmented Lagrangian. This requires: (i) bounding the successive differences of the dual variable by the successive differences of the primal variables, which is where the smoothness assumption is used; and (ii) requiring that the parameter @math be above a certain threshold. In particular, it is not clear whether this thresholding of the value of @math is fundamental to convergence, or just an artifact of the analysis.
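Since the concrete objective and notation are abbreviated above (@math placeholders), the following is a generic LaTeX sketch of the kind of indicator-function splitting, augmented Lagrangian, and ADMM updates being discussed. The symbols f, S, X, Z, Lambda and rho are generic placeholders and not the paper's notation.

```latex
% Generic sketch with placeholder symbols (f, S, X, Z, Lambda, rho), not the
% paper's notation: an equality-constrained splitting with an indicator term,
% its augmented Lagrangian, and the resulting ADMM updates.
\begin{align*}
 &\min_{X,\,Z}\ f(X) + \iota_{\mathcal{S}}(Z)
   \quad \text{s.t.} \quad X = Z,
 \qquad
 \iota_{\mathcal{S}}(Z) =
 \begin{cases}
   0,       & Z \in \mathcal{S},\\
   +\infty, & \text{otherwise};
 \end{cases}
 \\[4pt]
 &\mathcal{L}_{\rho}(X, Z, \Lambda)
   = f(X) + \iota_{\mathcal{S}}(Z)
   + \langle \Lambda,\, X - Z \rangle
   + \tfrac{\rho}{2}\, \| X - Z \|_F^{2};
 \\[4pt]
 &X^{k+1} = \arg\min_{X}\ \mathcal{L}_{\rho}\!\left(X, Z^{k}, \Lambda^{k}\right),
 \qquad
  Z^{k+1} = \arg\min_{Z}\ \mathcal{L}_{\rho}\!\left(X^{k+1}, Z, \Lambda^{k}\right),
 \\
 &\Lambda^{k+1} = \Lambda^{k} + \rho \left( X^{k+1} - Z^{k+1} \right).
\end{align*}
```

When the set S is nonconvex (e.g., rank-restricted), the Z-update becomes a projection onto a nonconvex set and the indicator term is nonsmooth, which is precisely the situation that violates the smoothness assumption mentioned above.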
{ "cite_N": [ "@cite_0", "@cite_27", "@cite_7", "@cite_17" ], "mid": [ "2155723734", "2164278908", "2962853966", "2295652899" ], "abstract": [ "Aiming at solving large-scale optimization problems, this paper studies distributed optimization methods based on the alternating direction method of multipliers (ADMM). By formulating the optimization problem as a consensus problem, the ADMM can be used to solve the consensus problem in a fully parallel fashion over a computer network with a star topology. However, traditional synchronized computation does not scale well with the problem size, as the speed of the algorithm is limited by the slowest workers. This is particularly true in a heterogeneous network where the computing nodes experience different computation and communication delays. In this paper, we propose an asynchronous distributed ADMM (AD-ADMM), which can effectively improve the time efficiency of distributed optimization. Our main interest lies in analyzing the convergence conditions of the AD-ADMM, under the popular partially asynchronous model, which is defined based on a maximum tolerable delay of the network. Specifically, by considering general and possibly non-convex cost functions, we show that the AD-ADMM is guaranteed to converge to the set of Karush–Kuhn–Tucker (KKT) points as long as the algorithm parameters are chosen appropriately according to the network delay. We further illustrate that the asynchrony of the ADMM has to be handled with care, as slightly modifying the implementation of the AD-ADMM can jeopardize the algorithm convergence, even under the standard convex setting.", "Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for l1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.", "In this paper, we analyze the convergence of the alternating direction method of multipliers (ADMM) for minimizing a nonconvex and possibly nonsmooth objective function, ( (x_0, ,x_p,y) ), subject to coupled linear equality constraints. Our ADMM updates each of the primal variables (x_0, ,x_p,y ), followed by updating the dual variable. 
We separate the variable y from (x_i )’s as it has a special role in our analysis. The developed convergence guarantee covers a variety of nonconvex functions such as piecewise linear functions, ( _q ) quasi-norm, Schatten-q quasi-norm ( (0<q<1 )), minimax concave penalty (MCP), and smoothly clipped absolute deviation penalty. It also allows nonconvex constraints such as compact manifolds (e.g., spherical, Stiefel, and Grassman manifolds) and linear complementarity constraints. Also, the (x_0 )-block can be almost any lower semi-continuous function. By applying our analysis, we show, for the first time, that several ADMM algorithms applied to solve nonconvex models in statistical learning, optimization on manifold, and matrix decomposition are guaranteed to converge. Our results provide sufficient conditions for ADMM to converge on (convex or nonconvex) monotropic programs with three or more blocks, as they are special cases of our model. ADMM has been regarded as a variant to the augmented Lagrangian method (ALM). We present a simple example to illustrate how ADMM converges but ALM diverges with bounded penalty parameter ( ). Indicated by this example and other analysis in this paper, ADMM might be a better choice than ALM for some nonconvex nonsmooth problems, because ADMM is not only easier to implement, it is also more likely to converge for the concerned scenarios.", "The alternating direction method of multipliers (ADMM) is widely used to solve large-scale linearly constrained optimization problems, convex or nonconvex, in many engineering fields. However there is a general lack of theoretical understanding of the algorithm when the objective function is nonconvex. In this paper we analyze the convergence of the ADMM for solving certain nonconvex consensus and sharing problems. We show that the classical ADMM converges to the set of stationary solutions, provided that the penalty parameter in the augmented Lagrangian is chosen to be sufficiently large. For the sharing problems, we show that the ADMM is convergent regardless of the number of variable blocks. Our analysis does not impose any assumptions on the iterates generated by the algorithm and is broadly applicable to many ADMM variants involving proximal update rules and various flexible block selection rules." ] }
1907.07729
2961115932
We consider the problem of rigid registration, where we wish to jointly register multiple point sets via rigid transforms. This arises in applications such as sensor network localization, multiview registration, and protein structure determination. The least-squares estimator for this problem can be reduced to a rank-constrained semidefinite program (REG-SDP). It was recently shown that by formally applying the alternating direction method of multipliers (ADMM), we can derive an iterative solver (REG-ADMM) for REG-SDP, wherein each subproblem admits a simple closed-form solution. The empirical success of REG-ADMM has been demonstrated for multiview registration. However, its convergence does not follow from the existing literature on nonconvex ADMM. In this work, we study the convergence of REG-ADMM and our main findings are as follows. We prove that any fixed point of REG-ADMM is a stationary (KKT) point of REG-SDP. Moreover, for clean measurements, we give an explicit formula for the ADMM parameter @math , for which REG-ADMM is guaranteed to converge to the global optimum (with arbitrary initialization). If the noise is low, we can still show that the iterates converge to the global optimum, provided they are initialized sufficiently close to the optimum. On the other hand, if the noise is high, we explain why REG-ADMM becomes unstable if @math is less than some threshold, irrespective of the initialization. We present simulation results to support our theoretical predictions. The novelty of our analysis lies in the fact that we exploit the notion of tightness of convex relaxation to arrive at our convergence results.
We do not make such smoothness assumptions in our analysis. We can afford to do this since we are analyzing a special class of problems, as opposed to the more general setups in @cite_17 @cite_0 @cite_7 . Instead of showing a monotonic decrease in the augmented Lagrangian, our analysis relies on the phenomenon of tightness of convex relaxation. This provides more insights into the convergence behavior of the algorithm. For instance, our explanation in Section shows that the instability of the algorithm (in the high noise regime) for low values of @math is fundamental, while suggesting why this instability is not observed in the low noise regime.
{ "cite_N": [ "@cite_0", "@cite_7", "@cite_17" ], "mid": [ "2155723734", "2962853966", "2295652899" ], "abstract": [ "Aiming at solving large-scale optimization problems, this paper studies distributed optimization methods based on the alternating direction method of multipliers (ADMM). By formulating the optimization problem as a consensus problem, the ADMM can be used to solve the consensus problem in a fully parallel fashion over a computer network with a star topology. However, traditional synchronized computation does not scale well with the problem size, as the speed of the algorithm is limited by the slowest workers. This is particularly true in a heterogeneous network where the computing nodes experience different computation and communication delays. In this paper, we propose an asynchronous distributed ADMM (AD-ADMM), which can effectively improve the time efficiency of distributed optimization. Our main interest lies in analyzing the convergence conditions of the AD-ADMM, under the popular partially asynchronous model, which is defined based on a maximum tolerable delay of the network. Specifically, by considering general and possibly non-convex cost functions, we show that the AD-ADMM is guaranteed to converge to the set of Karush–Kuhn–Tucker (KKT) points as long as the algorithm parameters are chosen appropriately according to the network delay. We further illustrate that the asynchrony of the ADMM has to be handled with care, as slightly modifying the implementation of the AD-ADMM can jeopardize the algorithm convergence, even under the standard convex setting.", "In this paper, we analyze the convergence of the alternating direction method of multipliers (ADMM) for minimizing a nonconvex and possibly nonsmooth objective function, ( (x_0, ,x_p,y) ), subject to coupled linear equality constraints. Our ADMM updates each of the primal variables (x_0, ,x_p,y ), followed by updating the dual variable. We separate the variable y from (x_i )’s as it has a special role in our analysis. The developed convergence guarantee covers a variety of nonconvex functions such as piecewise linear functions, ( _q ) quasi-norm, Schatten-q quasi-norm ( (0<q<1 )), minimax concave penalty (MCP), and smoothly clipped absolute deviation penalty. It also allows nonconvex constraints such as compact manifolds (e.g., spherical, Stiefel, and Grassman manifolds) and linear complementarity constraints. Also, the (x_0 )-block can be almost any lower semi-continuous function. By applying our analysis, we show, for the first time, that several ADMM algorithms applied to solve nonconvex models in statistical learning, optimization on manifold, and matrix decomposition are guaranteed to converge. Our results provide sufficient conditions for ADMM to converge on (convex or nonconvex) monotropic programs with three or more blocks, as they are special cases of our model. ADMM has been regarded as a variant to the augmented Lagrangian method (ALM). We present a simple example to illustrate how ADMM converges but ALM diverges with bounded penalty parameter ( ). Indicated by this example and other analysis in this paper, ADMM might be a better choice than ALM for some nonconvex nonsmooth problems, because ADMM is not only easier to implement, it is also more likely to converge for the concerned scenarios.", "The alternating direction method of multipliers (ADMM) is widely used to solve large-scale linearly constrained optimization problems, convex or nonconvex, in many engineering fields. 
However there is a general lack of theoretical understanding of the algorithm when the objective function is nonconvex. In this paper we analyze the convergence of the ADMM for solving certain nonconvex consensus and sharing problems. We show that the classical ADMM converges to the set of stationary solutions, provided that the penalty parameter in the augmented Lagrangian is chosen to be sufficiently large. For the sharing problems, we show that the ADMM is convergent regardless of the number of variable blocks. Our analysis does not impose any assumptions on the iterates generated by the algorithm and is broadly applicable to many ADMM variants involving proximal update rules and various flexible block selection rules." ] }
1907.07803
2959891429
One problem when studying how to find and fix syntax errors is how to get natural and representative examples of syntax errors. Most syntax error datasets are not free, open, and public, or they are extracted from novice programmers and do not represent syntax errors that the general population of developers would make. Programmers of all skill levels post questions and answers to Stack Overflow which may contain snippets of source code along with corresponding text and tags. Many snippets do not parse, thus they are ripe for forming a corpus of syntax errors and corrections. Our primary contribution is an approach for extracting natural syntax errors and their corresponding human made fixes to help syntax error research. A Python abstract syntax tree parser is used to determine preliminary errors and corrections on code blocks extracted from the SOTorrent data set. We further analyzed our code by executing the corrections in a Python interpreter. We applied our methodology to produce a public data set of 62,965 Python Stack Overflow code snippets with corresponding tags, errors, and stack traces. We found that errors made by Stack Overflow users do not match errors made by student developers or random mutations, implying there is a serious representativeness risk within the field. Finally we share our dataset openly so that future researchers can re-use and extend our syntax errors and fixes.
Syntactically incorrect code can be derived artificially, since formal programming languages provide grammar rules against which correctness can be checked. Random token-level insertions, deletions, and replacements have been performed to generate syntax errors from existing open-source Java projects @cite_2 . The authors of @cite_13 created Python syntax errors from valid code mined from GitHub by applying mutations on tokens, characters, and lines. Although generated errors are appealing due to the availability of open-source code, the study in @cite_10 demonstrated limitations of using mutations for software testing research: when mutants were used as replacements for real faults, only 73% of real faults were found to be coupled to the generated mutants, and when accounting for code coverage, the mutant-to-fault coupling effect is small @cite_10 .
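To make the mutation idea concrete, the following minimal Python sketch applies a random character-level mutation (delete, duplicate, or replace) to a snippet and checks parseability with the standard `ast` module; the mutation operators and the set of replacement characters are illustrative assumptions, not the exact operators used in the cited works.

```python
import ast
import random

def mutate_chars(source, rng=random):
    """Delete, duplicate, or replace one randomly chosen character of the source."""
    if not source:
        return source
    i = rng.randrange(len(source))
    op = rng.choice(["delete", "duplicate", "replace"])
    if op == "delete":
        return source[:i] + source[i + 1:]
    if op == "duplicate":
        return source[:i] + source[i] + source[i:]
    return source[:i] + rng.choice(":()[]{},.") + source[i + 1:]

def parses(source):
    """True if the snippet is syntactically valid Python."""
    try:
        ast.parse(source)
        return True
    except SyntaxError:
        return False

original = "def add(a, b):\n    return a + b\n"
mutant = mutate_chars(original)
print(repr(mutant), "parses:", parses(mutant))
```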
{ "cite_N": [ "@cite_10", "@cite_13", "@cite_2" ], "mid": [ "2135937266", "2281672363", "2060384944" ], "abstract": [ "A good test suite is one that detects real faults. Because the set of faults in a program is usually unknowable, this definition is not useful to practitioners who are creating test suites, nor to researchers who are creating and evaluating tools that generate test suites. In place of real faults, testing research often uses mutants, which are artificial faults -- each one a simple syntactic variation -- that are systematically seeded throughout the program under test. Mutation analysis is appealing because large numbers of mutants can be automatically-generated and used to compensate for low quantities or the absence of known real faults. Unfortunately, there is little experimental evidence to support the use of mutants as a replacement for real faults. This paper investigates whether mutants are indeed a valid substitute for real faults, i.e., whether a test suite’s ability to detect mutants is correlated with its ability to detect real faults that developers have fixed. Unlike prior studies, these investigations also explicitly consider the conflating effects of code coverage on the mutant detection rate. Our experiments used 357 real faults in 5 open-source applications that comprise a total of 321,000 lines of code. Furthermore, our experiments used both developer-written and automatically-generated test suites. The results show a statistically significant correlation between mutant detection and real fault detection, independently of code coverage. The results also give concrete suggestions on how to improve mutation analysis and reveal some inherent limitations.", "", "A frustrating aspect of software development is that compiler error messages often fail to locate the actual cause of a syntax error. An errant semicolon or brace can result in many errors reported throughout the file. We seek to find the actual source of these syntax errors by relying on the consistency of software: valid source code is usually repetitive and unsurprising. We exploit this consistency by constructing a simple N-gram language model of lexed source code tokens. We implemented an automatic Java syntax-error locator using the corpus of the project itself and evaluated its performance on mutated source code from several projects. Our tool, trained on the past versions of a project, can effectively augment the syntax error locations produced by the native compiler. Thus we provide a methodology and tool that exploits the naturalness of software source code to detect syntax errors alongside the parser." ] }
1907.07803
2959891429
One problem when studying how to find and fix syntax errors is how to get natural and representative examples of syntax errors. Most syntax error datasets are not free, open, and public, or they are extracted from novice programmers and do not represent syntax errors that the general population of developers would make. Programmers of all skill levels post questions and answers to Stack Overflow which may contain snippets of source code along with corresponding text and tags. Many snippets do not parse, thus they are ripe for forming a corpus of syntax errors and corrections. Our primary contribution is an approach for extracting natural syntax errors and their corresponding human made fixes to help syntax error research. A Python abstract syntax tree parser is used to determine preliminary errors and corrections on code blocks extracted from the SOTorrent data set. We further analyzed our code by executing the corrections in a Python interpreter. We applied our methodology to produce a public data set of 62,965 Python Stack Overflow code snippets with corresponding tags, errors, and stack traces. We found that errors made by Stack Overflow users do not match errors made by student developers or random mutations, implying there is a serious representativeness risk within the field. Finally we share our dataset openly so that future researchers can re-use and extend our syntax errors and fixes.
Automated source code repair, such as identifying and refactoring improper method names, also requires a labeled dataset of valid and invalid source code @cite_0 . Program repair is often viewed as different from syntax error correction because repaired code is benchmarked by executing tests, while syntax error correction relies primarily on parseability.
{ "cite_N": [ "@cite_0" ], "mid": [ "2943748428" ], "abstract": [ "To ensure code readability and facilitate software maintenance, program methods must be named properly. In particular, method names must be consistent with the corresponding method implementations. Debugging method names remains an important topic in the literature, where various approaches analyze commonalities among method names in a large dataset to detect inconsistent method names and suggest better ones. We note that the state-of-the-art does not analyze the implemented code itself to assess consistency. We thus propose a novel automated approach to debugging method names based on the analysis of consistency between method names and method code. The approach leverages deep feature representation techniques adapted to the nature of each artifact. Experimental results on over 2.1 million Java methods show that we can achieve up to 15 percentage points improvement over the state-of-the-art, establishing a record performance of 67.9 F1- measure in identifying inconsistent method names. We further demonstrate that our approach yields up to 25 accuracy in suggesting full names, while the state-of-the-art lags far behind at 1.1 accuracy. Finally, we report on our success in fixing 66 inconsistent method names in a live study on projects in the wild." ] }
1907.07803
2959891429
One problem when studying how to find and fix syntax errors is how to get natural and representative examples of syntax errors. Most syntax error datasets are not free, open, and public, or they are extracted from novice programmers and do not represent syntax errors that the general population of developers would make. Programmers of all skill levels post questions and answers to Stack Overflow which may contain snippets of source code along with corresponding text and tags. Many snippets do not parse, thus they are ripe for forming a corpus of syntax errors and corrections. Our primary contribution is an approach for extracting natural syntax errors and their corresponding human made fixes to help syntax error research. A Python abstract syntax tree parser is used to determine preliminary errors and corrections on code blocks extracted from the SOTorrent data set. We further analyzed our code by executing the corrections in a Python interpreter. We applied our methodology to produce a public data set of 62,965 Python Stack Overflow code snippets with corresponding tags, errors, and stack traces. We found that errors made by Stack Overflow users do not match errors made by student developers or random mutations, implying there is a serious representativeness risk within the field. Finally we share our dataset openly so that future researchers can re-use and extend our syntax errors and fixes.
Free and open datasets of naturally made errors and their fixes are more difficult to obtain. Blackbox, a data collection project within the BlueJ Java development environment, requires manual staff contact for access to data and forbids the release of the raw dataset @cite_11 . The authors of @cite_12 analyzed Python programs submitted to CS Circles, an online tool for learning Python. The study in @cite_4 examined Python code submitted by students in an introductory programming course at MIT. Gathering such data without privileged access to the code submissions is difficult, which limits the reproducibility of this research. Our research uses Stack Overflow instead, which is advantageous because the raw content is freely accessible on the internet, revisions and history are tracked, and contributors have a wide range of software engineering expertise and skill sets (Stack Overflow 2019 Survey, https://insights.stackoverflow.com/survey/2019).
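As an illustration of how such a Stack Overflow-based corpus can be filtered, the sketch below extracts fenced code blocks from a post body and keeps those that fail to parse as Python; the post format and the regular expression are assumptions made for the example and do not reflect the actual SOTorrent schema.

```python
import ast
import re

# Assumed post format: markdown-style ```python fenced blocks inside the body text.
FENCE_RE = re.compile(r"```(?:python)?\n(.*?)```", re.DOTALL)

def candidate_syntax_errors(post_body):
    """Return (snippet, error message) pairs for code blocks that do not parse."""
    broken = []
    for code in FENCE_RE.findall(post_body):
        try:
            ast.parse(code)
        except SyntaxError as err:
            broken.append((code, f"{type(err).__name__}: {err.msg}"))
    return broken

post = "Why does this fail?\n```python\ndef f(x)\n    return x + 1\n```\n"
for code, error in candidate_syntax_errors(post):
    print(error)
```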
{ "cite_N": [ "@cite_4", "@cite_12", "@cite_11" ], "mid": [ "", "2963074569", "2057833160" ], "abstract": [ "", "Which programming error messages are the most common? We investigate this question, motivated by writing error explanations for novices. We consider large data sets in Python and Java that include both syntax and run-time errors. In both data sets, after grouping essentially identical messages, the error message frequencies empirically resemble Zipf-Mandelbrot distributions. We use a maximum-likelihood approach to fit the distribution parameters. This gives one possible way to contrast languages or compilers quantitatively.", "Automatically observing and recording the programming behaviour of novices is an established computing education research technique. However, prior studies have been conducted at a single institution on a small or medium scale, without the possibility of data re-use. Now, the widespread availability of always-on Internet access allows for data collection at a much larger, global scale. In this paper we report on the Blackbox project, begun in June 2013. Blackbox is a perpetual data collection project that collects data from worldwide users of the BlueJ IDE -- a programming environment designed for novice programmers. Over one hundred thousand users have already opted-in to Blackbox. The collected data is anonymous and is available to other researchers for use in their own studies, thus benefitting the larger research community. In this paper, we describe the data available via Blackbox, show some examples of analyses that can be performed using the collected data, and discuss some of the analysis challenges that lie ahead." ] }
1907.07769
2959758584
We present a voice conversion solution using recurrent sequence to sequence modeling for DNNs. Our solution takes advantage of recent advances in attention based modeling in the fields of Neural Machine Translation (NMT), Text-to-Speech (TTS) and Automatic Speech Recognition (ASR). The problem consists of converting between voices in a parallel setting when @math audio pairs are available. Our seq2seq architecture makes use of a hierarchical encoder to summarize input audio frames. On the decoder side, we use an attention based architecture used in recent TTS works. Since there is a dearth of large multispeaker voice conversion databases needed for training DNNs, we resort to training the network with a large single speaker dataset as an autoencoder. This is then adapted for the smaller multispeaker voice conversion datasets available for voice conversion. In contrast with other voice conversion works that use @math , duration and linguistic features, our system uses mel spectrograms as the audio representation. Output mel frames are converted back to audio using a wavenet vocoder.
Pertinent to our discussion are the seq2seq modeling works @cite_42 @cite_16 . In these works, additional loss terms are introduced to encourage the model to learn alignment and to preserve linguistic context. Alignment is maintained by noting that, in the voice conversion problem, the attention curve between source and target is predominantly diagonal, and by including a diagonal penalty matrix in the loss function, a term referred to as guided attention in the TTS work @cite_2 . An additional consideration is to prevent the decoder from 'losing' linguistic context, as would happen if it simply learned to reconstruct the target output. This is addressed with additional neural networks that ensure the hidden representation produced by the encoder (similar reasoning applies to the decoder) is capable of reconstructing the input and thereby retains context information. These manifest as additional loss terms, called 'context preservation losses', which bear a resemblance to cycle consistency losses @cite_20 . Also noteworthy is that these approaches use non-recurrent architectures for their seq2seq modeling.
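For concreteness, the guided attention term mentioned above can be written as a penalty matrix W[t, s] = 1 - exp(-((s/S - t/T)^2) / (2 g^2)) that is multiplied element-wise with the attention weights. The NumPy sketch below follows the formulation in the cited TTS work, with g = 0.2 as an assumed default.

```python
import numpy as np

def guided_attention_penalty(n_src, n_tgt, g=0.2):
    """Diagonal penalty matrix W[t, s] = 1 - exp(-((s/S - t/T)^2) / (2 g^2))."""
    s = np.arange(n_src) / n_src
    t = np.arange(n_tgt) / n_tgt
    return 1.0 - np.exp(-((s[None, :] - t[:, None]) ** 2) / (2 * g ** 2))

def guided_attention_loss(attention, g=0.2):
    """attention: (n_tgt, n_src) matrix of attention weights (rows sum to 1)."""
    W = guided_attention_penalty(attention.shape[1], attention.shape[0], g)
    return float(np.mean(attention * W))

A_diag = np.eye(50)                  # well-aligned (diagonal) attention
A_flat = np.full((50, 50), 1.0 / 50) # unaligned (flat) attention
# The diagonal alignment incurs zero penalty here; the flat one is penalized.
print(guided_attention_loss(A_diag), guided_attention_loss(A_flat))
```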
{ "cite_N": [ "@cite_42", "@cite_20", "@cite_16", "@cite_2" ], "mid": [ "2950429209", "2962793481", "", "2962834036" ], "abstract": [ "This paper describes a method based on a sequence-to-sequence learning (Seq2Seq) with attention and context preservation mechanism for voice conversion (VC) tasks. Seq2Seq has been outstanding at numerous tasks involving sequence modeling such as speech synthesis and recognition, machine translation, and image captioning. In contrast to current VC techniques, our method 1) stabilizes and accelerates the training procedure by considering guided attention and proposed context preservation losses, 2) allows not only spectral envelopes but also fundamental frequency contours and durations of speech to be converted, 3) requires no context information such as phoneme labels, and 4) requires no time-aligned source and target speech data in advance. In our experiment, the proposed VC framework can be trained in only one day, using only one GPU of an NVIDIA Tesla K80, while the quality of the synthesized speech is higher than that of speech converted by Gaussian mixture model-based VC and is comparable to that of speech generated by recurrent neural network-based text-to-speech synthesis, which can be regarded as an upper limit on VC performance.", "Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain X to a target domain Y in the absence of paired examples. Our goal is to learn a mapping G : X → Y such that the distribution of images from G(X) is indistinguishable from the distribution Y using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping F : Y → X and introduce a cycle consistency loss to push F(G(X)) ≈ X (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach.", "", "This paper describes a novel text-to-speech (TTS) technique based on deep convolutional neural networks (CNN), without use of any recurrent units. Recurrent neural networks (RNN) have become a standard technique to model sequential data recently, and this technique has been used in some cutting-edge neural TTS techniques. However, training RNN components often requires a very powerful computer, or a very long time, typically several days or weeks. Recent other studies, on the other hand, have shown that CNN-based sequence synthesis can be much faster than RNN-based techniques, because of high parallelizability. The objective of this paper is to show that an alternative neural TTS based only on CNN alleviate these economic costs of training. In our experiment, the proposed Deep Convolutional TTS was sufficiently trained overnight (15 hours), using an ordinary gaming PC equipped with two GPUs, while the quality of the synthesized speech was almost acceptable." ] }
1907.07769
2959758584
We present a voice conversion solution using recurrent sequence to sequence modeling for DNNs. Our solution takes advantage of recent advances in attention based modeling in the fields of Neural Machine Translation (NMT), Text-to-Speech (TTS) and Automatic Speech Recognition (ASR). The problem consists of converting between voices in a parallel setting when @math audio pairs are available. Our seq2seq architecture makes use of a hierarchical encoder to summarize input audio frames. On the decoder side, we use an attention based architecture used in recent TTS works. Since there is a dearth of large multispeaker voice conversion databases needed for training DNNs, we resort to training the network with a large single speaker dataset as an autoencoder. This is then adapted for the smaller multispeaker voice conversion datasets available for voice conversion. In contrast with other voice conversion works that use @math , duration and linguistic features, our system uses mel spectrograms as the audio representation. Output mel frames are converted back to audio using a wavenet vocoder.
Developments on the generative modeling front, primarily Variational Autoencoders @cite_5 and Generative Adversarial Networks @cite_21 , have led to their use in voice conversion problems. In @cite_11 , a learned similarity metric obtained through a GAN discriminator is used to correct the over-smoothed speech that results from maximum likelihood training, which imposes a particular form for the loss function (usually the MSE). A conditional VAEGAN @cite_35 setup is used in @cite_19 to implement voice conversion, with conditioning on speakers, together with a Wasserstein GAN discriminator @cite_31 to fix the blurriness issue associated with VAEs. Moreover, an important apparatus for training non-parallel voice setups is the cycle consistency loss from the well-known CycleGAN work for images @cite_20 , which forms a building block of @cite_51 and @cite_39 .
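The cycle consistency idea can be sketched as follows: two mappings G (source to target) and F (target to source) are penalized by the reconstruction error of the round trips x -> F(G(x)) and y -> G(F(y)). In the sketch below, the linear maps are stand-ins for the neural networks used in the cited works; the shapes and the L1 penalty are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
W_g = rng.normal(size=(8, 8))      # stand-in for the source -> target mapping G
W_f = np.linalg.pinv(W_g)          # stand-in for the target -> source mapping F

def G(x):
    return x @ W_g.T

def F(y):
    return y @ W_f.T

def cycle_consistency_loss(x_src, y_tgt):
    """L1 reconstruction error of the round trips x -> F(G(x)) and y -> G(F(y))."""
    loss_x = np.abs(F(G(x_src)) - x_src).mean()
    loss_y = np.abs(G(F(y_tgt)) - y_tgt).mean()
    return loss_x + loss_y

x = rng.normal(size=(4, 8))          # unpaired "source" features
y = rng.normal(size=(4, 8))          # unpaired "target" features
print(cycle_consistency_loss(x, y))  # near zero because F is an exact inverse of G here
```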
{ "cite_N": [ "@cite_35", "@cite_21", "@cite_39", "@cite_19", "@cite_5", "@cite_31", "@cite_51", "@cite_20", "@cite_11" ], "mid": [ "2202109488", "1710476689", "2805669069", "2608338293", "", "", "2774848319", "2962793481", "2747744257" ], "abstract": [ "We present an autoencoder that leverages learned representations to better measure similarities in data space. By combining a variational autoencoder with a generative adversarial network we can use learned feature representations in the GAN discriminator as basis for the VAE reconstruction objective. Thereby, we replace element-wise errors with feature-wise errors to better capture the data distribution while offering invariance towards e.g. translation. We apply our method to images of faces and show that it outperforms VAEs with element-wise similarity measures in terms of visual fidelity. Moreover, we show that the method learns an embedding in which high-level abstract visual features (e.g. wearing glasses) can be modified using simple arithmetic.", "For many AI projects, deep learning techniques are increasingly being used as the building blocks for innovative solutions ranging from image classification to object detection, image segmentation, image similarity, and text analytics (e.g., sentiment analysis, key phrase extraction). GANs, first introduced by (2014), are emerging as a powerful new approach toward teaching computers how to do complex tasks through a generative process. As noted by Yann LeCun (at http: bit.ly LeCunGANs ), GANs are truly the “coolest idea in machine learning in the last 20 years.”", "This paper proposes a method that allows non-parallel many-to-many voice conversion (VC) by using a variant of a generative adversarial network (GAN) called StarGAN. Our method, which we call StarGAN-VC, is noteworthy in that it (1) requires no parallel utterances, transcriptions, or time alignment procedures for speech generator training, (2) simultaneously learns many-to-many mappings across different attribute domains using a single generator network, (3) is able to generate converted speech signals quickly enough to allow real-time implementations and (4) requires only several minutes of training examples to generate reasonably realistic-sounding speech. Subjective evaluation experiments on a non-parallel many-to-many speaker identity conversion task revealed that the proposed method obtained higher sound quality and speaker similarity than a state-of-the-art method based on variational autoencoding GANs.", "Building a voice conversion (VC) system from non-parallel speech corpora is challenging but highly valuable in real application scenarios. In most situations, the source and the target speakers do not repeat the same texts or they may even speak different languages. In this case, one possible, although indirect, solution is to build a generative model for speech. Generative models focus on explaining the observations with latent variables instead of learning a pairwise transformation function, thereby bypassing the requirement of speech frame alignment. In this paper, we propose a non-parallel VC framework with a variational autoencoding Wasserstein generative adversarial network (VAW-GAN) that explicitly considers a VC objective when building the speech model. 
Experimental results corroborate the capability of our framework for building a VC system from unaligned data, and demonstrate improved conversion quality.", "", "", "We propose a parallel-data-free voice-conversion (VC) method that can learn a mapping from source to target speech without relying on parallel data. The proposed method is general purpose, high quality, and parallel-data free and works without any extra data, modules, or alignment procedure. It also avoids over-smoothing, which occurs in many conventional statistical model-based VC methods. Our method, called CycleGAN-VC, uses a cycle-consistent adversarial network (CycleGAN) with gated convolutional neural networks (CNNs) and an identity-mapping loss. A CycleGAN learns forward and inverse mappings simultaneously using adversarial and cycle-consistency losses. This makes it possible to find an optimal pseudo pair from unpaired data. Furthermore, the adversarial loss contributes to reducing over-smoothing of the converted feature sequence. We configure a CycleGAN with gated CNNs and train it with an identity-mapping loss. This allows the mapping function to capture sequential and hierarchical structures while preserving linguistic information. We evaluated our method on a parallel-data-free VC task. An objective evaluation showed that the converted feature sequence was near natural in terms of global variance and modulation spectra. A subjective evaluation showed that the quality of the converted speech was comparable to that obtained with a Gaussian mixture model-based method under advantageous conditions with parallel and twice the amount of data.", "Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain X to a target domain Y in the absence of paired examples. Our goal is to learn a mapping G : X → Y such that the distribution of images from G(X) is indistinguishable from the distribution Y using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping F : Y → X and introduce a cycle consistency loss to push F(G(X)) ≈ X (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach.", "" ] }
1907.07769
2959758584
We present a voice conversion solution using recurrent sequence to sequence modeling for DNNs. Our solution takes advantage of recent advances in attention based modeling in the fields of Neural Machine Translation (NMT), Text-to-Speech (TTS) and Automatic Speech Recognition (ASR). The problem consists of converting between voices in a parallel setting when @math audio pairs are available. Our seq2seq architecture makes use of a hierarchical encoder to summarize input audio frames. On the decoder side, we use an attention based architecture used in recent TTS works. Since there is a dearth of large multispeaker voice conversion databases needed for training DNNs, we resort to training the network with a large single speaker dataset as an autoencoder. This is then adapted for the smaller multispeaker voice conversion datasets available for voice conversion. In contrast with other voice conversion works that use @math , duration and linguistic features, our system uses mel spectrograms as the audio representation. Output mel frames are converted back to audio using a wavenet vocoder.
Our work is influenced by recent TTS works involving transfer learning and speaker adaptation. The recently published work @cite_49 demonstrates a methodology to adapt a trained network to new speakers with a wavenet. Likewise, in @cite_8 , a speaker embedding is extracted with a discriminative network for new, unseen speakers and is then used to condition a TTS pipeline similar to Tacotron. This philosophy is also used in @cite_15 , where speaker embeddings are either extracted separately or trained as part of the model during adaptation. In all these contexts, the emphasis is on adapting to small, limited corpora, thereby circumventing the need for large datasets to train these models from scratch. In our work, we use the same idea to get around the lack of sufficient data in the voice conversion dataset under consideration. However, instead of producing new speaker embeddings, we retrain the model for each new @math pair, a process that is rapid owing to the small size of the corpus.
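A minimal sketch of this two-stage recipe, under the simplifying assumption of a linear model trained with gradient descent, is shown below: the same parameters are first trained as an autoencoder on a large corpus and then briefly fine-tuned on a small paired corpus. It only illustrates the training schedule, not the actual seq2seq architecture; all sizes, learning rates, and data are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 16
W = rng.normal(scale=0.1, size=(dim, dim))        # shared model parameters

def step(W, X, Y, lr):
    """One gradient step on the mean squared error of the linear map X @ W -> Y."""
    grad = 2 * X.T @ (X @ W - Y) / len(X)
    return W - lr * grad

# Stage 1: pretrain as an autoencoder on an abundant single-speaker corpus.
X_big = rng.normal(size=(2048, dim))
for _ in range(200):
    W = step(W, X_big, X_big, lr=1e-2)

# Stage 2: adapt the same parameters on a small parallel corpus for one
# (source, target) speaker pair, with fewer steps and a smaller learning rate.
X_small = rng.normal(size=(64, dim))
Y_small = X_small @ rng.normal(scale=0.1, size=(dim, dim))  # toy target frames
for _ in range(50):
    W = step(W, X_small, Y_small, lr=1e-3)
print(float(np.abs(X_small @ W - Y_small).mean()))
```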
{ "cite_N": [ "@cite_15", "@cite_49", "@cite_8" ], "mid": [ "2788357188", "2963912924", "2963432880" ], "abstract": [ "Voice cloning is a highly desired feature for personalized speech interfaces. Neural network based speech synthesis has been shown to generate high quality speech for a large number of speakers. In this paper, we introduce a neural voice cloning system that takes a few audio samples as input. We study two approaches: speaker adaptation and speaker encoding. Speaker adaptation is based on fine-tuning a multi-speaker generative model with a few cloning samples. Speaker encoding is based on training a separate model to directly infer a new speaker embedding from cloning audios and to be used with a multi-speaker generative model. In terms of naturalness of the speech and its similarity to original speaker, both approaches can achieve good performance, even with very few cloning audios. While speaker adaptation can achieve better naturalness and similarity, the cloning time or required memory for the speaker encoding approach is significantly less, making it favorable for low-resource deployment.", "", "We describe a neural network-based system for text-to-speech (TTS) synthesis that is able to generate speech audio in the voice of many different speakers, including those unseen during training. Our system consists of three independently trained components: (1) a speaker encoder network, trained on a speaker verification task using an independent dataset of noisy speech from thousands of speakers without transcripts, to generate a fixed-dimensional embedding vector from seconds of reference speech from a target speaker; (2) a sequence-to-sequence synthesis network based on Tacotron 2, which generates a mel spectrogram from text, conditioned on the speaker embedding; (3) an auto-regressive WaveNet-based vocoder that converts the mel spectrogram into a sequence of time domain waveform samples. We demonstrate that the proposed model is able to transfer the knowledge of speaker variability learned by the discriminatively-trained speaker encoder to the new task, and is able to synthesize natural speech from speakers that were not seen during training. We quantify the importance of training the speaker encoder on a large and diverse speaker set in order to obtain the best generalization performance. Finally, we show that randomly sampled speaker embeddings can be used to synthesize speech in the voice of novel speakers dissimilar from those used in training, indicating that the model has learned a high quality speaker representation." ] }
1907.08038
2956579178
We propose a novel algorithm to ensure @math -differential privacy for answering range queries on trajectory data. In order to guarantee privacy, differential privacy mechanisms add noise to either data or query, thus introducing errors to queries made and potentially decreasing the utility of information. In contrast to the state-of-the-art, our method achieves significantly lower error as it is the first data- and query-aware approach for such queries. The key challenge for answering range queries on trajectory data privately is to ensure an accurate count. Simply representing a trajectory as a set instead of of points will generally lead to highly inaccurate query answers as it ignores the sequential dependency of location points in trajectories, i.e., will violate the consistency of trajectory data. Furthermore, trajectories are generally unevenly distributed across a city and adding noise uniformly will generally lead to a poor utility. To achieve differential privacy, our algorithm adaptively adds noise to the input data according to the given query set. It first privately partitions the data space into uniform regions and computes the traffic density of each region. The regions and their densities, in addition to the given query set, are then used to estimate the distribution of trajectories over the queried space, which ensures high accuracy for the given query set. We show the accuracy and efficiency of our algorithm using extensive empirical evaluations on real and synthetic data sets.
@cite_20 define the dependency between cells instead of points by mapping the trajectories onto a grid and counting the movement frequencies between adjacent cells. However, a frequency vector only maintains the number of transitions for a group of observations, without information about the spatial adjacency of two vectors. This information is crucial for a range query, which counts the vectors overlapping the query area. In our work, we use a spatial histogram, i.e., a grid that captures both the cell counts and the spatial adjacency of trajectories.
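The distinction can be illustrated with a small sketch that discretizes trajectories onto a uniform grid and records both per-cell counts and transitions between consecutive cells, i.e., the sequential information that a plain frequency vector of visited cells does not retain. The grid size and the coordinates are arbitrary choices for the example.

```python
from collections import Counter

def to_cell(point, cell_size=1.0):
    """Map a 2D point to the index of the uniform grid cell containing it."""
    x, y = point
    return (int(x // cell_size), int(y // cell_size))

def spatial_histogram(trajectories, cell_size=1.0):
    """Per-cell counts plus transition counts between consecutive cells."""
    cell_counts = Counter()
    transition_counts = Counter()
    for traj in trajectories:
        cells = [to_cell(p, cell_size) for p in traj]
        cell_counts.update(cells)
        transition_counts.update(zip(cells, cells[1:]))
    return cell_counts, transition_counts

trajs = [[(0.2, 0.1), (0.8, 1.3), (1.9, 1.7)],
         [(1.1, 0.4), (1.6, 1.2)]]
cells, transitions = spatial_histogram(trajs)
print(cells)
print(transitions)
```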
{ "cite_N": [ "@cite_20" ], "mid": [ "47444322" ], "abstract": [ "We propose a novel approach to privacy-preserving analytical processing within a distributed setting, and tackle the problem of obtaining aggregated information about vehicle traffic in a city from movement data collected by individual vehicles and shipped to a central server. Movement data are sensitive because people’s whereabouts have the potential to reveal intimate personal traits, such as religious or sexual preferences, and may allow re-identification of individuals in a database. We provide a privacy-preserving framework for movement data aggregation based on trajectory generalization in a distributed environment. The proposed solution, based on the differential privacy model and on sketching techniques for efficient data compression, provides a formal data protection safeguard. Using real-life data, we demonstrate the effectiveness of our approach also in terms of data utility preserved by the data transformation." ] }
1907.08038
2956579178
We propose a novel algorithm to ensure @math -differential privacy for answering range queries on trajectory data. In order to guarantee privacy, differential privacy mechanisms add noise to either data or query, thus introducing errors to queries made and potentially decreasing the utility of information. In contrast to the state-of-the-art, our method achieves significantly lower error as it is the first data- and query-aware approach for such queries. The key challenge for answering range queries on trajectory data privately is to ensure an accurate count. Simply representing a trajectory as a set instead of of points will generally lead to highly inaccurate query answers as it ignores the sequential dependency of location points in trajectories, i.e., will violate the consistency of trajectory data. Furthermore, trajectories are generally unevenly distributed across a city and adding noise uniformly will generally lead to a poor utility. To achieve differential privacy, our algorithm adaptively adds noise to the input data according to the given query set. It first privately partitions the data space into uniform regions and computes the traffic density of each region. The regions and their densities, in addition to the given query set, are then used to estimate the distribution of trajectories over the queried space, which ensures high accuracy for the given query set. We show the accuracy and efficiency of our algorithm using extensive empirical evaluations on real and synthetic data sets.
Recently, @cite_13 developed a mechanism named PriSH (Private Spatial Histogram) for range queries on trajectories. PriSH publishes a synthetic spatial histogram under @math -differential privacy. It is a query-aware mechanism that extends the idea of MWEM in @cite_3 . PriSH takes a spatial histogram and a query set as input and utilizes the correlation between the queries to estimate the distribution of the original histogram privately. To maintain consistency in the histogram, PriSH uses an approach that may result in a histogram far from the original spatial histogram. The reason is that this approach ensures consistency only locally, which may lead to overcorrection. In this paper, we propose a data- and query-aware mechanism that utilizes the trajectory density in different regions as well as the correlation among the given queries to estimate the optimal spatial histogram with significantly higher utility. Our query-aware strategy employs a linear programming approach that guarantees an optimally consistent histogram, which leads to a significant improvement in the utility of the results.
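For intuition, the core MWEM-style step that such query-aware mechanisms build on can be sketched as follows: answer one linear (range) query on the true histogram with Laplace noise, then multiplicatively reweight a synthetic histogram toward that noisy answer. This is a simplified illustration, not a privacy-audited implementation and not the exact PriSH algorithm; in particular, the noise scale and the single-query loop are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

true_hist = np.array([30.0, 10.0, 5.0, 55.0])       # true cell counts
total = true_hist.sum()
synthetic = np.full_like(true_hist, total / len(true_hist))  # uniform start

query = np.array([1.0, 1.0, 0.0, 0.0])              # range query = subset of cells
epsilon = 1.0
# Noise scale chosen for illustration only; the sensitivity analysis for
# trajectory data is more involved, as discussed above.
noisy_answer = query @ true_hist + rng.laplace(scale=1.0 / epsilon)

# Multiplicative-weights step toward the noisy measurement.
error = noisy_answer - query @ synthetic
synthetic *= np.exp(query * error / (2 * total))
synthetic *= total / synthetic.sum()                 # keep the total mass fixed
print(synthetic)
```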
{ "cite_N": [ "@cite_13", "@cite_3" ], "mid": [ "2859661414", "2170204206" ], "abstract": [ "Studying trajectories of individuals has received growing interest. The aggregated movement behaviour of people provides important insights about their habits, interests, and lifestyles. Understanding and utilizing trajectory data is a crucial part of many applications such as location based services, urban planning, and traffic monitoring systems. Spatial histograms and spatial range queries are key components in such applications to efficiently store and answer queries on trajectory data. A spatial histogram maintains the sequentiality of location points in a trajectory by a strong sequential dependency among histogram cells. This dependency is an essential property in answering spatial range queries. However, the trajectories of individuals are unique and even aggregating them in spatial histograms cannot completely ensure an individual's privacy. A key technique to ensure privacy for data publishing ϵ-differential privacy as it provides a strong guarantee on an individual's provided data. Our work is the first that guarantees ϵ-differential privacy for spatial histograms on trajectories, while ensuring the sequentiality of trajectory data, i.e., its consistency. Consistency is key for any database and our proposed mechanism, PriSH, synthesizes a spatial histogram and ensures the consistency of published histogram with respect to the strong dependency constraint. In extensive experiments on real and synthetic datasets, we show that (1) PriSH is highly scalable with the dataset size and granularity of the space decomposition, (2) the distribution of aggregate trajectory information in the synthesized histogram accurately preserves the distribution of original histogram, and (3) the output has high accuracy in answering arbitrary spatial range queries.", "We present a new algorithm for differentially private data release, based on a simple combination of the Multiplicative Weights update rule with the Exponential Mechanism. Our MWEM algorithm achieves what are the best known and nearly optimal theoretical guarantees, while at the same time being simple to implement and experimentally more accurate on actual data sets than existing techniques." ] }
1907.07723
2959678280
We study the problem of repeated play in a zero-sum game in which the payoff matrix may change, in a possibly adversarial fashion, on each round; we call these Online Matrix Games. Finding the Nash Equilibrium (NE) of a two player zero-sum game is core to many problems in statistics, optimization, and economics, and for a fixed game matrix this can be easily reduced to solving a linear program. But when the payoff matrix evolves over time our goal is to find a sequential algorithm that can compete with, in a certain sense, the NE of the long-term-averaged payoff matrix. We design an algorithm with small NE regret--that is, we ensure that the long-term payoff of both players is close to minimax optimum in hindsight. Our algorithm achieves near-optimal dependence with respect to the number of rounds and depends poly-logarithmically on the number of available actions of the players. Additionally, we show that the naive reduction, where each player simply minimizes its own regret, fails to achieve the stated objective regardless of which algorithm is used. We also consider the so-called bandit setting, where the feedback is significantly limited, and we provide an algorithm with small NE regret using one-point estimates of each payoff matrix.
The reader familiar with Online Convex Optimization (OCO) may find it closely related to the OMG problem. In the OCO setting, a player is given a convex, closed, and bounded action set @math , and must repeatedly choose an action @math before the convex function @math is revealed. The player's goal is to obtain sublinear regret, defined as @math . This problem is well studied, and several algorithms such as Online Gradient Descent @cite_12 , Regularized Follow the Leader @cite_22 @cite_53 , and Perturbed Follow the Leader @cite_6 achieve optimal individual regret bounds that scale as @math . The most natural (although incorrect) approach to attack the OMG problem is to equip each of the players with a sublinear individual regret algorithm. However, we will show later that if both players use an algorithm that guarantees sublinear individual regret, then it is impossible to achieve sublinear NE regret when the payoff matrices are chosen adversarially. In other words, the algorithms for the OCO setting cannot be directly applied to the OMG problem considered in this paper.
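For reference, a minimal Online Gradient Descent sketch in the spirit of the algorithm cited above is given below for linear losses over a Euclidean ball; it tracks the individual regret against the best fixed action in hindsight. The loss model, domain, and step size are illustrative assumptions.

```python
import numpy as np

def project_to_ball(x, radius=1.0):
    """Euclidean projection onto the ball of the given radius."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

rng = np.random.default_rng(0)
d, T = 5, 2000
x = np.zeros(d)
thetas, losses = [], []
for t in range(1, T + 1):
    theta = rng.normal(size=d)                   # loss vector revealed after playing x
    losses.append(theta @ x)                     # linear loss f_t(x) = <theta, x>
    thetas.append(theta)
    x = project_to_ball(x - theta / np.sqrt(t))  # gradient step with eta_t = 1/sqrt(t)

# Best fixed action in hindsight for linear losses over the unit ball.
best_in_hindsight = -np.linalg.norm(np.sum(thetas, axis=0))
regret = sum(losses) - best_in_hindsight
print("individual regret:", regret, " regret / sqrt(T):", regret / np.sqrt(T))
```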
{ "cite_N": [ "@cite_53", "@cite_22", "@cite_12", "@cite_6" ], "mid": [ "1508384000", "2118075434", "2148825261", "2076798318" ], "abstract": [ "We introduce an efficient algorithm for the problem of online linear optimization in the bandit setting which achieves the optimal O∗( √ T ) regret. The setting is a natural generalization of the nonstochastic multi-armed bandit problem, and the existence of an efficient optimal algorithm has been posed as an open problem in a number of recent papers. We show how the difficulties encountered by previous approaches are overcome by the use of a self-concordant potential function. Our approach presents a novel connection between online learning and interior point methods.", "Online learning is the process of answering a sequence of questions given knowledge of the correct answers to previous questions and possibly additional available information. Answering questions in an intelligent fashion and being able to make rational decisions as a result is a basic feature of everyday life. Will it rain today (so should I take an umbrella)? Should I fight the wild animal that is after me, or should I run away? Should I open an attachment in an email message or is it a virus? The study of online learning algorithms is thus an important domain in machine learning, and one that has interesting theoretical properties and practical applications. This dissertation describes a novel framework for the design and analysis of online learning algorithms. We show that various online learning algorithms can all be derived as special cases of our algorithmic framework. This unified view explains the properties of existing algorithms and also enables us to derive several new interesting algorithms. Online learning is performed in a sequence of consecutive rounds, where at each round the learner is given a question and is required to provide an answer to this question. After predicting an answer, the correct answer is revealed and the learner suffers a loss if there is a discrepancy between his answer and the correct one. The algorithmic framework for online learning we propose in this dissertation stems from a connection that we make between the notions of regret in online learning and weak duality in convex optimization. Regret bounds are the common thread in the analysis of online learning algorithms. A regret bound measures the performance of an online algorithm relative to the performance of a competing prediction mechanism, called a competing hypothesis. The competing hypothesis can be chosen in hindsight from a class of hypotheses, after observing the entire sequence of question- answer pairs. Over the years, competitive analysis techniques have been refined and extended to numerous prediction problems by employing complex and varied notions of progress toward a good competing hypothesis. We propose a new perspective on regret bounds which is based on the notion of duality in convex optimization. Regret bounds are universal in the sense that they hold for any possible fixed hypothesis in a given hypothesis class. We therefore cast the universal bound as a lower bound", "Convex programming involves a convex set F ⊆ Rn and a convex cost function c : F → R. The goal of convex programming is to find a point in F which minimizes c. In online convex programming, the convex set is known in advance, but in each step of some repeated optimization problem, one must select a point in F before seeing the cost function for that step. 
This can be used to model factory production, farm production, and many other industrial optimization problems where one is unaware of the value of the items produced until they have already been constructed. We introduce an algorithm for this domain. We also apply this algorithm to repeated games, and show that it is really a generalization of infinitesimal gradient ascent, and the results here imply that generalized infinitesimal gradient ascent (GIGA) is universally consistent.", "A constant rebalanced portfolio is an investment strategy that keeps the same distribution of wealth among a set of stocks from day to day. There has been much work on Cover's Universal algorithm, which is competitive with the best constant rebalanced portfolio determined in hindsight (Cover, 1991, , 1998, Blum and Kalai, 1999, Foster and Vohra, 1999, Vovk, 1998, Cover and Ordentlich, 1996a, Cover, 1996c). While this algorithm has good performance guarantees, all known implementations are exponential in the number of stocks, restricting the number of stocks used in experiments (, 1998, Cover and Ordentlich, 1996a, Ordentlich and Cover, 1996b, Cover, 1996c, Blum and Kalai, 1999). We present an efficient implementation of the Universal algorithm that is based on non-uniform random walks that are rapidly mixing (Applegate and Kannan, 1991, Lovasz and Simonovits, 1992, Frieze and Kannan, 1999). This same implementation also works for non-financial applications of the Universal algorithm, such as data compression (Cover, 1996c) and language modeling (, 1999)." ] }
1907.07723
2959678280
We study the problem of repeated play in a zero-sum game in which the payoff matrix may change, in a possibly adversarial fashion, on each round; we call these Online Matrix Games. Finding the Nash Equilibrium (NE) of a two player zero-sum game is core to many problems in statistics, optimization, and economics, and for a fixed game matrix this can be easily reduced to solving a linear program. But when the payoff matrix evolves over time our goal is to find a sequential algorithm that can compete with, in a certain sense, the NE of the long-term-averaged payoff matrix. We design an algorithm with small NE regret--that is, we ensure that the long-term payoff of both players is close to minimax optimum in hindsight. Our algorithm achieves near-optimal dependence with respect to the number of rounds and depends poly-logarithmically on the number of available actions of the players. Additionally, we show that the naive reduction, where each player simply minimizes its own regret, fails to achieve the stated objective regardless of which algorithm is used. We also consider the so-called bandit setting, where the feedback is significantly limited, and we provide an algorithm with small NE regret using one-point estimates of each payoff matrix.
Related to the OMG problem with bandit feedback is the seminal work of @cite_4 . They provide the first sublinear regret bound for Online Convex Optimization with bandit feedback, using a one-point estimate of the gradient. The one-point gradient estimate used in @cite_4 is similar to those independently proposed in @cite_35 and in @cite_44 . The regret bound provided in @cite_4 is @math , which is suboptimal. In @cite_53 , the authors give the first @math bound for the special case when the functions are linear. More recently, @cite_39 and @cite_16 designed the first efficient algorithms with @math regret for the general online convex optimization case; unfortunately, the dependence on the dimension @math in the regret rate is a very large polynomial. Our one-point matrix estimate is most closely related to the random estimator in @cite_8 for linear functions. It is possible to use the more sophisticated techniques from @cite_53 @cite_39 @cite_16 to improve our NE regret bound in section ; however, the result does not seem to be immediate and we leave this as future work.
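The one-point estimator can be sketched directly: querying the loss at a single randomly perturbed point and scaling by d/delta yields an unbiased estimate of the gradient of a smoothed version of the loss. The quadratic test function below is only for illustration (for this quadratic, the smoothed gradient coincides with the true gradient 2x); the perturbation radius and sample count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def one_point_gradient(f, x, delta=0.5, rng=rng):
    """(d / delta) * f(x + delta * u) * u, with u uniform on the unit sphere."""
    d = len(x)
    u = rng.normal(size=d)
    u /= np.linalg.norm(u)
    return (d / delta) * f(x + delta * u) * u

def f(x):
    """Example convex loss: f(x) = ||x||^2."""
    return float(np.sum(x ** 2))

x = np.ones(3)
estimate = np.mean([one_point_gradient(f, x) for _ in range(20000)], axis=0)
print("averaged estimate:", estimate, " true gradient:", 2 * x)
```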
{ "cite_N": [ "@cite_35", "@cite_4", "@cite_8", "@cite_53", "@cite_39", "@cite_44", "@cite_16" ], "mid": [ "", "2004001705", "2116067849", "1508384000", "2301614296", "2012117977", "2473549844" ], "abstract": [ "", "We study a general online convex optimization problem. We have a convex set S and an unknown sequence of cost functions c 1 , c 2 ,..., and in each period, we choose a feasible point x t in S, and learn the cost c t (x t ). If the function c t is also revealed after each period then, as Zinkevich shows in [25], gradient descent can be used on these functions to get regret bounds of O(√n). That is, after n rounds, the total cost incurred will be O(√n) more than the cost of the best single feasible decision chosen with the benefit of hindsight, min x Σ ct(x).We extend this to the \"bandit\" setting, where, in each period, only the cost c t (x t ) is revealed, and bound the expected regret as O(n3 4).Our approach uses a simple approximation of the gradient that is computed from evaluating c t at a single (random) point. We show that this biased estimate is sufficient to approximate gradient descent on the sequence of functions. In other words, it is possible to use gradient descent without seeing anything more than the value of the functions at a single point. The guarantees hold even in the most general case: online against an adaptive adversary.For the online linear optimization problem [15], algorithms with low regrets in the bandit setting have recently been given against oblivious [1] and adaptive adversaries [19]. In contrast to these algorithms, which distinguish between explicit explore and exploit periods, our algorithm can be interpreted as doing a small amount of exploration in each period.", "In the multi-armed bandit problem, a gambler must decide which arm of K non-identical slot machines to play in a sequence of trials so as to maximize his reward. This classical problem has received much attention because of the simple model it provides of the trade-off between exploration (trying out each arm to find the best one) and exploitation (playing the arm believed to give the best payoff). Past solutions for the bandit problem have almost always relied on assumptions about the statistics of the slot machines. In this work, we make no statistical assumptions whatsoever about the nature of the process generating the payoffs of the slot machines. We give a solution to the bandit problem in which an adversary, rather than a well-behaved stochastic process, has complete control over the payoffs. In a sequence of T plays, we prove that the expected per-round payoff of our algorithm approaches that of the best arm at the rate O(T sup -1 3 ), and we give an improved rate of convergence when the best arm has fairly low payoff. We also consider a setting in which the player has a team of \"experts\" advising him on which arm to play; here, we give a strategy that will guarantee expected payoff close to that of the best expert. Finally, we apply our result to the problem of learning to play an unknown repeated matrix game against an all-powerful adversary.", "We introduce an efficient algorithm for the problem of online linear optimization in the bandit setting which achieves the optimal O∗( √ T ) regret. The setting is a natural generalization of the nonstochastic multi-armed bandit problem, and the existence of an efficient optimal algorithm has been posed as an open problem in a number of recent papers. 
We show how the difficulties encountered by previous approaches are overcome by the use of a self-concordant potential function. Our approach presents a novel connection between online learning and interior point methods.", "We consider the problem of online convex optimization against an arbitrary adversary with bandit feedback, known as bandit convex optimization. We give the first @math -regret algorithm for this setting based on a novel application of the ellipsoid method to online learning. This bound is known to be tight up to logarithmic factors. Our analysis introduces new tools in discrete convex geometry.", "The simultaneous perturbation stochastic approximation (SPSA) algorithm has proven very effective for difficult multivariate optimization problems where it is not possible to obtain direct gradient information. As discussed to date, SPSA is based on a highly efficient gradient approximation requiring only two measurements of the loss function independent of the number of parameters being estimated. This note presents a form of SPSA that requires only one function measurement (for any dimension). Theory is presented that identifies the class of problems for which this one-measurement form will be asymptotically superior to the standard two-measurement form.", "We consider the adversarial convex bandit problem and we build the first @math -time algorithm with @math -regret for this problem. To do so we introduce three new ideas in the derivative-free optimization literature: (i) kernel methods, (ii) a generalization of Bernoulli convolutions, and (iii) a new annealing schedule for exponential weights (with increasing learning rate). The basic version of our algorithm achieves @math -regret, and we show that a simple variant of this algorithm can be run in @math -time per step at the cost of an additional @math factor in the regret. These results improve upon the @math -regret and @math -time result of the first two authors, and the @math -regret and @math -time result of Hazan and Li. Furthermore we conjecture that another variant of the algorithm could achieve @math -regret, and moreover that this regret is unimprovable (the current best lower bound being @math and it is achieved with linear functions). For the simpler situation of zeroth order stochastic convex optimization this corresponds to the conjecture that the optimal query complexity is of order @math ." ] }
1901.06026
2909735549
In crowd counting datasets, people appear at different scales, depending on their distance to the camera. To address this issue, we propose a novel multi-branch scale-aware attention network that exploits the hierarchical structure of convolutional neural networks and generates, in a single forward pass, multi-scale density predictions from different layers of the architecture. To aggregate these maps into our final prediction, we present a new soft attention mechanism that learns a set of gating masks. Furthermore, we introduce a scale-aware loss function to regularize the training of different branches and guide them to specialize on a particular scale. As this new training requires ground-truth annotations for the size of each head, we also propose a simple, yet effective technique to estimate it automatically. Finally, we present an ablation study on each of these components and compare our approach against the literature on 4 crowd counting datasets: UCF-QNRF, ShanghaiTech A & B and UCF_CC_50. Without bells and whistles, our approach achieves state-of-the-art on all these datasets. We observe a remarkable improvement on the UCF-QNRF (25 ) and a significant one on the others (around 10 ).
Attention models have been widely used for many computer vision tasks like image classification @cite_16 @cite_47 , object detection @cite_19 @cite_20 , semantic segmentation @cite_5 @cite_10 , saliency detection @cite_13 and, very recently, crowd counting @cite_30 . These models work by learning an intermediate attention map that is used to select the most relevant piece of information for visual analysis. The works most similar to ours are those of @cite_5 and @cite_30 . Both approaches extract multi-scale features from several resized input images and use an attention mechanism to weight the importance of each pixel of each feature map. One clear drawback of these approaches is that their inference is slow, as each test image needs to be re-sized and fed into the CNN model multiple times. Instead, our approach is much faster: it requires a single input image and a single pass through the model, as our multi-scale features are generated by pooling information from different layers of a single network rather than from multiple passes through it.
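As a sketch of the soft attention aggregation described above, per-pixel gating masks obtained by a softmax over scales weight each scale's density prediction before summation. The shapes and the random arrays stand in for network outputs and are assumptions for the example, not the paper's exact architecture.

```python
import numpy as np

def softmax(x, axis=0):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
S, H, W = 3, 32, 32                            # number of scales, map height / width (assumed)
density_maps = rng.random((S, H, W))           # stand-ins for per-scale density predictions
attention_logits = rng.normal(size=(S, H, W))  # stand-ins for the learned gating masks

gates = softmax(attention_logits, axis=0)      # per-pixel weights over scales sum to 1
fused = (gates * density_maps).sum(axis=0)     # final density map
print(fused.shape, float(fused.sum()))         # the estimated count is the sum of the density map
```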
{ "cite_N": [ "@cite_13", "@cite_30", "@cite_19", "@cite_5", "@cite_47", "@cite_16", "@cite_10", "@cite_20" ], "mid": [ "2796255296", "2804654955", "", "2158865742", "2221625691", "", "", "1689909837" ], "abstract": [ "Contexts play an important role in the saliency detection task. However, given a context region, not all contextual information is helpful for the final task. In this paper, we propose a novel pixel-wise contextual attention network, i.e., the PiCANet, to learn to selectively attend to informative context locations for each pixel. Specifically, for each pixel, it can generate an attention map in which each attention weight corresponds to the contextual relevance at each context location. An attended contextual feature can then be constructed by selectively aggregating the contextual information. We formulate the proposed PiCANet in both global and local forms to attend to global and local contexts, respectively. Both models are fully differentiable and can be embedded into CNNs for joint training. We also incorporate the proposed models with the U-Net architecture to detect salient objects. Extensive experiments show that the proposed PiCANets can consistently improve saliency detection performance. The global and local PiCANets facilitate learning global contrast and homogeneousness, respectively. As a result, our saliency model can detect salient objects more accurately and uniformly, thus performing favorably against the state-of-the-art methods.", "Because of the powerful learning capability of deep neural networks, counting performance via density map estimation has improved significantly during the past several years. However, it is still very challenging due to severe occlusion, large scale variations, and perspective distortion. Scale variations (from image to image) coupled with perspective distortion (within one image) result in huge scale changes of the object size. Earlier methods based on convolutional neural networks (CNN) typically did not handle this scale variation explicitly, until Hydra-CNN and MCNN. MCNN uses three columns, each with different filter sizes, to extract features at different scales. In this paper, in contrast to using filters of different sizes, we utilize an image pyramid to deal with scale variations. It is more effective and efficient to resize the input fed into the network, as compared to using larger filter sizes. Secondly, we adaptively fuse the predictions from different scales (using adaptively changing per-pixel weights), which makes our method adapt to scale changes within an image. The adaptive fusing is achieved by generating an across-scale attention map, which softly selects a suitable scale for each pixel, followed by a 1x1 convolution. Extensive experiments on three popular datasets show very compelling results.", "", "Incorporating multi-scale features in fully convolutional neural networks (FCNs) has been a key element to achieving state-of-the-art performance on semantic image segmentation. One common way to extract multi-scale features is to feed multiple resized input images to a shared deep network and then merge the resulting features for pixelwise classification. In this work, we propose an attention mechanism that learns to softly weight the multi-scale features at each pixel location. We adapt a state-of-the-art semantic image segmentation model, which we jointly train with multi-scale input images and the attention model. 
The proposed attention model not only outperforms average- and max-pooling, but allows us to diagnostically visualize the importance of features at different positions and scales. Moreover, we show that adding extra supervision to the output at each scale is essential to achieving excellent performance when merging multi-scale features. We demonstrate the effectiveness of our model with extensive experiments on three challenging datasets, including PASCAL-Person-Part, PASCAL VOC 2012 and a subset of MS-COCO 2014.", "While feedforward deep convolutional neural networks (CNNs) have been a great success in computer vision, it is important to note that the human visual cortex generally contains more feedback than feedforward connections. In this paper, we will briefly introduce the background of feedbacks in the human visual cortex, which motivates us to develop a computational feedback mechanism in deep neural networks. In addition to the feedforward inference in traditional neural networks, a feedback loop is introduced to infer the activation status of hidden layer neurons according to the \"goal\" of the network, e.g., high-level semantic labels. We analogize this mechanism as \"Look and Think Twice.\" The feedback networks help better visualize and understand how deep neural networks work, and capture visual attention on expected objects, even in images with cluttered background and multiple objects. Experiments on ImageNet dataset demonstrate its effectiveness in solving tasks such as image classification and object localization.", "", "", "We present a novel detection method using a deep convolutional neural network (CNN), named AttentionNet. We cast an object detection problem as an iterative classification problem, which is the most suitable form of a CNN. AttentionNet provides quantized weak directions pointing a target object and the ensemble of iterative predictions from AttentionNet converges to an accurate object boundary box. Since AttentionNet is a unified network for object detection, it detects objects without any separated models from the object proposal to the post bounding-box regression. We evaluate AttentionNet by a human detection task and achieve the state-of-the-art performance of 65 (AP) on PASCAL VOC 2007 2012 with an 8-layered architecture only." ] }
1901.06024
2950571912
Benchmarks of bugs are essential to empirically evaluate automatic program repair tools. In this paper, we present Bears, a project for collecting and storing bugs into an extensible bug benchmark for automatic repair studies in Java. The collection of bugs relies on commit building state from Continuous Integration (CI) to find potential pairs of buggy and patched program versions from open-source projects hosted on GitHub. Each pair of program versions passes through a pipeline where an attempt of reproducing a bug and its patch is performed. The core step of the reproduction pipeline is the execution of the test suite of the program on both program versions. If a test failure is found in the buggy program version candidate and no test failure is found in its patched program version candidate, a bug and its patch were successfully reproduced. The uniqueness of Bears is the usage of CI (builds) to identify buggy and patched program version candidates, which has been widely adopted in the last years in open-source projects. This approach allows us to collect bugs from a diversity of projects beyond mature projects that use bug tracking systems. Moreover, Bears was designed to be publicly available and to be easily extensible by the research community through automatic creation of branches with bugs in a given GitHub repository, which can be used for pull requests in the Bears repository. We present in this paper the approach employed by Bears, and we deliver the version 1.0 of Bears, which contains 251 reproducible bugs collected from 72 projects that use the Travis CI and Maven build environment.
Benchmarks of bugs are assets that have been used in software bug-related research fields to support empirical evaluations. Several benchmarks were first created for the software testing research community, such as Siemens @cite_18 and SIR @cite_7 , two notable and well-cited benchmarks. The majority of bugs in these two benchmarks were seeded into existing bug-free program versions; this is far from the approach of Bears, which targets real bugs.
{ "cite_N": [ "@cite_18", "@cite_7" ], "mid": [ "2134691366", "1971137495" ], "abstract": [ "This paper reports an experimental study investigating the effectiveness of two code-based test adequacy criteria for identifying sets of test cases that detect faults. The all-edges and all-DUs (modified all-uses) coverage criteria were applied to 130 faulty program versions derived from seven moderate size base programs by seeding realistic faults. We generated several thousand test sets for each faulty program and examined the relationship between fault detection and coverage. Within the limited domain of our experiments, test sets achieving coverage levels over 90 usually showed significantly better fault detection than randomly chosen test sets of the same size. In addition, significant improvements in the effectiveness of coverage-based tests usually occurred as coverage increased from 90 to 100 . However the results also indicate that 100 code coverage alone is not a reliable indicator of the effectiveness of a test set. We also found that tests based respectively on control-flow and dataflow criteria are frequency complementary in their effectiveness. >", "Where the creation, understanding, and assessment of software testing and regression testing techniques are concerned, controlled experimentation is an indispensable research methodology. Obtaining the infrastructure necessary to support such experimentation, however, is difficult and expensive. As a result, progress in experimentation with testing techniques has been slow, and empirical data on the costs and effectiveness of techniques remains relatively scarce. To help address this problem, we have been designing and constructing infrastructure to support controlled experimentation with testing and regression testing techniques. This paper reports on the challenges faced by researchers experimenting with testing techniques, including those that inform the design of our infrastructure. The paper then describes the infrastructure that we are creating in response to these challenges, and that we are now making available to other researchers, and discusses the impact that this infrastructure has had and can be expected to have." ] }
1901.06024
2950571912
Benchmarks of bugs are essential to empirically evaluate automatic program repair tools. In this paper, we present Bears, a project for collecting and storing bugs into an extensible bug benchmark for automatic repair studies in Java. The collection of bugs relies on commit building state from Continuous Integration (CI) to find potential pairs of buggy and patched program versions from open-source projects hosted on GitHub. Each pair of program versions passes through a pipeline where an attempt of reproducing a bug and its patch is performed. The core step of the reproduction pipeline is the execution of the test suite of the program on both program versions. If a test failure is found in the buggy program version candidate and no test failure is found in its patched program version candidate, a bug and its patch were successfully reproduced. The uniqueness of Bears is the usage of CI (builds) to identify buggy and patched program version candidates, which has been widely adopted in the last years in open-source projects. This approach allows us to collect bugs from a diversity of projects beyond mature projects that use bug tracking systems. Moreover, Bears was designed to be publicly available and to be easily extensible by the research community through automatic creation of branches with bugs in a given GitHub repository, which can be used for pull requests in the Bears repository. We present in this paper the approach employed by Bears, and we deliver the version 1.0 of Bears, which contains 251 reproducible bugs collected from 72 projects that use the Travis CI and Maven build environment.
To the best of our knowledge, the first benchmarks proposed for automatic program repair research are ManyBugs and IntroClass @cite_4 . ManyBugs contains 185 bugs collected from nine large, popular, open-source programs. On the other hand, IntroClass targets small programs written by novices, and contains 998 bugs collected from student-written versions of six small programming assignments in an undergraduate programming course. Both benchmarks are for the C language.
{ "cite_N": [ "@cite_4" ], "mid": [ "841012168" ], "abstract": [ "The field of automated software repair lacks a set of common benchmark problems. Although benchmark sets are used widely throughout computer science, existing benchmarks are not easily adapted to the problem of automatic defect repair, which has several special requirements. Most important of these is the need for benchmark programs with reproducible, important defects and a deterministic method for assessing if those defects have been repaired. This article details the need for a new set of benchmarks, outlines requirements, and then presents two datasets, ManyBugs and IntroClass , consisting between them of 1,183 defects in 15 C programs. Each dataset is designed to support the comparative evaluation of automatic repair algorithms asking a variety of experimental questions. The datasets have empirically defined guarantees of reproducibility and benchmark quality, and each study object is categorized to facilitate qualitative evaluation and comparisons by category of bug or program. The article presents baseline experimental results on both datasets for three existing repair methods, GenProg, AE, and TrpAutoRepair, to reduce the burden on researchers who adopt these datasets for their own comparative evaluations." ] }
1901.06024
2950571912
Benchmarks of bugs are essential to empirically evaluate automatic program repair tools. In this paper, we present Bears, a project for collecting and storing bugs into an extensible bug benchmark for automatic repair studies in Java. The collection of bugs relies on commit building state from Continuous Integration (CI) to find potential pairs of buggy and patched program versions from open-source projects hosted on GitHub. Each pair of program versions passes through a pipeline where an attempt of reproducing a bug and its patch is performed. The core step of the reproduction pipeline is the execution of the test suite of the program on both program versions. If a test failure is found in the buggy program version candidate and no test failure is found in its patched program version candidate, a bug and its patch were successfully reproduced. The uniqueness of Bears is the usage of CI (builds) to identify buggy and patched program version candidates, which has been widely adopted in the last years in open-source projects. This approach allows us to collect bugs from a diversity of projects beyond mature projects that use bug tracking systems. Moreover, Bears was designed to be publicly available and to be easily extensible by the research community through automatic creation of branches with bugs in a given GitHub repository, which can be used for pull requests in the Bears repository. We present in this paper the approach employed by Bears, and we deliver the version 1.0 of Bears, which contains 251 reproducible bugs collected from 72 projects that use the Travis CI and Maven build environment.
More recently, other benchmarks were proposed for automatic program repair. Codeflaws @cite_5 contains 3,902 bugs extracted from programming contests available on Codeforces. Codeflaws is also for the C language, and its programs range from one to 322 lines of code. QuixBugs @cite_15 is a multi-lingual benchmark that contains single-line bugs from 40 programs translated into both Java and Python.
{ "cite_N": [ "@cite_5", "@cite_15" ], "mid": [ "2620986014", "2762550985" ], "abstract": [ "Several automated program repair techniques have been proposed to reduce the time and effort spent in bug-fixing. While these repair tools are designed to be generic such that they could address many software faults, different repair tools may fix certain types of faults more effectively than other tools. Therefore, it is important to compare more objectively the effectiveness of different repair tools on various fault types. However, existing benchmarks on automated program repairs do not allow thorough investigation of the relationship between fault types and the effectiveness of repair tools. We present Codeflaws, a set of 3902 defects from 7436 programs automatically classified across 39 defect classes (we refer to different types of fault as defect classes derived from the syntactic differences between a buggy program and a patched program).", "Recent years have seen an explosion of work in automated program repair. While previous work has focused exclusively on tools for single languages, recent work in multi-language transformation has opened the door for multi-language program repair tools. Evaluating the performance of such a tool requires having a benchmark set of similar buggy programs in different languages. We present QuixBugs, consisting of 40 programs translated to both Python and Java, each with a bug on a single line. The QuixBugs benchmark suite is based on problems from the Quixey Challenge, where programmers were given a short buggy program and 1 minute to fix the bug." ] }
1901.06024
2950571912
Benchmarks of bugs are essential to empirically evaluate automatic program repair tools. In this paper, we present Bears, a project for collecting and storing bugs into an extensible bug benchmark for automatic repair studies in Java. The collection of bugs relies on commit building state from Continuous Integration (CI) to find potential pairs of buggy and patched program versions from open-source projects hosted on GitHub. Each pair of program versions passes through a pipeline where an attempt of reproducing a bug and its patch is performed. The core step of the reproduction pipeline is the execution of the test suite of the program on both program versions. If a test failure is found in the buggy program version candidate and no test failure is found in its patched program version candidate, a bug and its patch were successfully reproduced. The uniqueness of Bears is the usage of CI (builds) to identify buggy and patched program version candidates, which has been widely adopted in the last years in open-source projects. This approach allows us to collect bugs from a diversity of projects beyond mature projects that use bug tracking systems. Moreover, Bears was designed to be publicly available and to be easily extensible by the research community through automatic creation of branches with bugs in a given GitHub repository, which can be used for pull requests in the Bears repository. We present in this paper the approach employed by Bears, and we deliver the version 1.0 of Bears, which contains 251 reproducible bugs collected from 72 projects that use the Travis CI and Maven build environment.
The closest benchmarks to Bears are Defects4J @cite_12 and Bugs.jar @cite_0 , both for Java. Defects4J contains 395 reproducible bugs collected from six projects, and Bugs.jar contains 1,158 reproducible bugs collected from eight Apache projects. To collect bugs, both benchmarks rely on bug tracking systems, and they contain bugs from large, mature projects. Bears, on the other hand, was designed to collect bugs from a diversity of projects beyond large and mature ones: we remove the requirement that projects use a bug tracking system. Note that bug tracking systems are used for documenting bugs. Continuous Integration, by contrast, is used to actually build and test a project, which is closer to the task of identifying reproducible bugs.
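As a rough illustration of the reproduction criterion described in the Bears abstract above (run the project's test suite on both program versions; a bug is kept only if the buggy version fails and the patched version passes), here is a hedged Python sketch. The Maven command, directory layout, and the exit-code-based pass/fail decision are simplifying assumptions for illustration, not the actual Bears pipeline.

```python
# Illustrative sketch of the buggy/patched reproduction check; the "mvn test"
# invocation and exit-code check are assumptions, not the Bears implementation.
import subprocess

def test_suite_passes(project_dir: str) -> bool:
    """Run the project's test suite and report whether it passes."""
    result = subprocess.run(
        ["mvn", "test"], cwd=project_dir,
        capture_output=True, text=True,
    )
    return result.returncode == 0

def bug_reproduced(buggy_dir: str, patched_dir: str) -> bool:
    """A candidate pair is kept only if the buggy version has a test failure
    and the patched version has none."""
    return (not test_suite_passes(buggy_dir)) and test_suite_passes(patched_dir)
```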
{ "cite_N": [ "@cite_0", "@cite_12" ], "mid": [ "2883977877", "2156723666" ], "abstract": [ "We present Bugs.jar, a large-scale dataset for research in automated debugging, patching, and testing of Java programs. Bugs.jar is comprised of 1,158 bugs and patches, drawn from 8 large, popular open-source Java projects, spanning 8 diverse and prominent application categories. It is an order of magnitude larger than Defects4J, the only other dataset in its class. We discuss the methodology used for constructing Bugs.jar, the representation of the dataset, several use-cases, and an illustration of three of the use-cases through the application of 3 specific tools on Bugs.jar, namely our own tool, E lixir , and two third-party tools, Ekstazi and JaCoCo.", "Empirical studies in software testing research may not be comparable, reproducible, or characteristic of practice. One reason is that real bugs are too infrequently used in software testing research. Extracting and reproducing real bugs is challenging and as a result hand-seeded faults or mutants are commonly used as a substitute. This paper presents Defects4J, a database and extensible framework providing real bugs to enable reproducible studies in software testing research. The initial version of Defects4J contains 357 real bugs from 5 real-world open source pro- grams. Each real bug is accompanied by a comprehensive test suite that can expose (demonstrate) that bug. Defects4J is extensible and builds on top of each program’s version con- trol system. Once a program is configured in Defects4J, new bugs can be added to the database with little or no effort. Defects4J features a framework to easily access faulty and fixed program versions and corresponding test suites. This framework also provides a high-level interface to common tasks in software testing research, making it easy to con- duct and reproduce empirical studies. Defects4J is publicly available at http: defects4j.org." ] }
1901.06144
2910986363
Accurate, nontrivial quantum operations on many qubits are experimentally challenging. As opposed to the standard approach of compiling larger unitaries into sequences of 2-qubit gates, we propose a protocol on Hamiltonian control fields which implements highly selective multi-qubit gates in a strongly-coupled many-body quantum system. We exploit the selectiveness of resonant driving to exchange only 2 out of @math eigenstates of some background Hamiltonian, and discuss a basis transformation, the eigengate, that makes this operation relevant to the computational basis. The latter has a second use as a Hahn echo which undoes the dynamical phases due to the background Hamiltonian. We find that the error of such protocols scales favourably with the gate time as @math , but the protocol becomes inefficient with a growing number of qubits N. The framework is numerically tested in the context of a spin chain model first described by Polychronakos, for which we show that an earlier solution method naturally gives rise to an eigengate. Our techniques could be of independent interest for the theory of driven many-body systems.
We previously described a very similar resonantly driven gate in Ref. @cite_12 , which was based on the so-called Krawtchouk spin chain. In the present work, we generalize many aspects of this first result, and show how the same line of reasoning applies to a very different system featuring long-range rather than just nearest-neighbor interactions.
{ "cite_N": [ "@cite_12" ], "mid": [ "2736372133" ], "abstract": [ "textabstractWe propose a strategy for engineering multiqubit quantum gates. As a first step, it employs an eigengate to map states in the computational basis to eigenstates of a suitable many-body Hamiltonian. The second step employs resonant driving to enforce a transition between a single pair of eigenstates, leaving all others unchanged. The procedure is completed by mapping back to the computational basis. We demonstrate the strategy for the case of a linear array with an even number N of qubits, with specific XX+YY couplings between nearest neighbors. For this so-called Krawtchouk chain, a two-body driving term leads to the iSWAP_N gate, which we numerically test for N = 4 and 6." ] }
1901.06144
2910986363
Accurate, nontrivial quantum operations on many qubits are experimentally challenging. As opposed to the standard approach of compiling larger unitaries into sequences of 2-qubit gates, we propose a protocol on Hamiltonian control fields which implements highly selective multi-qubit gates in a strongly-coupled many-body quantum system. We exploit the selectiveness of resonant driving to exchange only 2 out of @math eigenstates of some background Hamiltonian, and discuss a basis transformation, the eigengate, that makes this operation relevant to the computational basis. The latter has a second use as a Hahn echo which undoes the dynamical phases due to the background Hamiltonian. We find that the error of such protocols scales favourably with the gate time as @math , but the protocol becomes inefficient with a growing number of qubits N. The framework is numerically tested in the context of a spin chain model first described by Polychronakos, for which we show that an earlier solution method naturally gives rise to an eigengate. Our techniques could be of independent interest for the theory of driven many-body systems.
The most obvious competitor of our protocol is conventional compiling of any quantum operation into a universal set of single- and two-qubit gates. Extensive research efforts have greatly optimized compiling methods, and in the asymptotics of many qubits, the compiling approach becomes increasingly favorable compared to our proposal. For a recent overview, see Ref. @cite_27 . We present our work not as an alternative to compiling, but rather as a creative twist on the fields of condensed matter and quantum control, which might find applications in highly specialized systems. We also present our methods, such as the eigengate presented in Sec. , as tools that may find applications elsewhere.
{ "cite_N": [ "@cite_27" ], "mid": [ "2755920271" ], "abstract": [ "To enable a quantum computer to solve practical problems more efficiently than classical computers, quantum programming languages and compilers are required to translate quantum algorithms into machine code; here the currently available software is reviewed." ] }
1901.05997
2910606695
Anecdotal evidence has emerged suggesting that state-sponsored organizations, like the Russian Internet Research Agency, have exploited mainstream social networks. Their primary goal is apparently to conduct information warfare operations to manipulate public opinion using accounts disguised as "normal" people. To increase engagement and credibility of their posts, these accounts regularly share images. However, the use of images by state-sponsored accounts has yet to be examined by the research community. In this work, we address this gap by analyzing a ground truth dataset of 1.8M images posted to Twitter by so-called Russian trolls. More specifically, we analyze the content of the images, as well as the posting activity of the accounts. Among other things, we find that the image posting activity of Russian trolls is tightly coupled with real-world events, and that their targets, as well as the content shared, changed over time. When looking at the interplay between domains that shared the same images as state-sponsored trolls, we find clear-cut differences in the origin and/or spread of images across the Web. Overall, our findings provide new insight into how state-sponsored trolls operate, and specifically how they use imagery to achieve their goals.
Other work has studied state-sponsored accounts' behavior on, and use of, social networks. Specifically, @cite_16 analyze the advertisements purchased by Russian accounts on Facebook. By performing clustering and semantic analysis, they identify the accounts' targeted campaigns over time, concluding that their main goal is to sow division in the community, and that the most effective campaigns share similar characteristics. @cite_46 compare a set of Russian troll accounts against a random set of Twitter users, showing that Russian troll accounts exhibit different behaviors in their use of the Twitter platform when compared to random users. In follow-up work, @cite_0 analyze the activities of Russian and Iranian trolls on Twitter and Reddit. They find substantial differences between them (e.g., Russian trolls were pro-Trump, Iranian ones anti-Trump), that their behavior and targets vary greatly over time, and that Russian trolls discuss different topics across Web communities (e.g., they discuss cryptocurrencies on Reddit but not on Twitter). Also, @cite_28 examine the exploitation of various Web platforms (e.g., social networks and search engines), showing that state-sponsored accounts use them to advance their propaganda by promoting content and their own controlled domains.
{ "cite_N": [ "@cite_0", "@cite_28", "@cite_46", "@cite_16" ], "mid": [ "2900289476", "2897648888", "2786091114", "2963635075" ], "abstract": [ "Over the past few years, extensive anecdotal evidence emerged that suggests the involvement of state-sponsored actors (or \"trolls\") in online political campaigns with the goal to manipulate public opinion and sow discord. Recently, Twitter and Reddit released ground truth data about Russian and Iranian state-sponsored actors that were active on their platforms. In this paper, we analyze these ground truth datasets across several axes to understand how these actors operate, how they evolve over time, who are their targets, how their strategies changed over time, and what is their influence to the Web's information ecosystem. Among other things we find: a) campaigns of these actors were influenced by real-world events; b) these actors were employing different tactics and had different targets over time, thus their automated detection is not straightforward; and c) Russian trolls were clearly pro-Trump, whereas Iranian trolls were anti-Trump. Finally, using Hawkes Processes, we quantified the influence that these actors had to four Web communities: Reddit, Twitter, 4chan's Politically Incorrect board ( pol ), and Gab, finding that Russian trolls were more influential than Iranians with the exception of pol .", "The Russia-based Internet Research Agency (IRA) carried out a broad information campaign in the U.S. before and after the 2016 presidential election. The organization created an expansive set of internet properties: web domains, Facebook pages, and Twitter bots, which received traffic via purchased Facebook ads, tweets, and search engines indexing their domains. We investigate the scope of IRA activities in 2017, joining data from Facebook and Twitter with logs from the Internet Explorer 11 and Edge browsers and the Bing.com search engine. The studies demonstrate both the ease with which malicious actors can harness social media and search engines for propaganda campaigns, and the ability to track and understand such activities by fusing content and activity resources from multiple internet services. We show how cross-platform analyses can provide an unprecedented lens on attempts to manipulate opinions and elections in democracies.", "Over the past couple of years, anecdotal evidence has emerged linking coordinated campaigns by state-sponsored actors with efforts to manipulate public opinion on the Web, often around major political events, through dedicated accounts, or \"trolls.\" Although they are often involved in spreading disinformation on social media, there is little understanding of how these trolls operate, what type of content they disseminate, and most importantly their influence on the information ecosystem. In this paper, we shed light on these questions by analyzing 27K tweets posted by 1K Twitter users identified as having ties with Russia's Internet Research Agency and thus likely state-sponsored trolls. We compare their behavior to a random set of Twitter users, finding interesting differences in terms of the content they disseminate, the evolution of their account, as well as their general behavior and use of the Twitter platform. Then, using a statistical model known as Hawkes Processes, we quantify the influence that these accounts had on the dissemination of news on social platforms such as Twitter, Reddit, and 4chan. 
Overall, our findings indicate that Russian troll accounts managed to stay active for long periods of time and to reach a substantial number of Twitter users with their messages. When looking at their ability of spreading news content and making it viral, however, we find that their effect on social platforms was minor, with the significant exception of news published by the Russian state-sponsored news outlet RT (Russia Today).", "One of the key aspects of the United States democracy is free and fair elections that allow for a peaceful transfer of power from one President to the next. The 2016 US presidential election stands out due to suspected foreign influence before, during, and after the election. A significant portion of that suspected influence was carried out via social media. In this paper, we look specifically at 3,500 Facebook ads allegedly purchased by the Russian government. These ads were released on May 10, 2018 by the US Congress House Intelligence Committee. We analyzed the ads using natural language processing techniques to determine textual and semantic features associated with the most effective ones. We clustered the ads over time into the various campaigns and the labeled parties associated with them. We also studied the effectiveness of Ads on an individual, campaign and party basis. The most effective ads tend to have less positive sentiment, focus on past events and are more specific and personalized in nature. The more effective campaigns also show such similar characteristics. The campaigns’ duration and promotion of the Ads suggest a desire to sow division rather than sway the election." ] }
1901.05997
2910606695
Anecdotal evidence has emerged suggesting that state-sponsored organizations, like the Russian Internet Research Agency, have exploited mainstream social networks. Their primary goal is apparently to conduct information warfare operations to manipulate public opinion using accounts disguised as "normal" people. To increase engagement and credibility of their posts, these accounts regularly share images. However, the use of images by state-sponsored accounts has yet to be examined by the research community. In this work, we address this gap by analyzing a ground truth dataset of 1.8M images posted to Twitter by so-called Russian trolls. More specifically, we analyze the content of the images, as well as the posting activity of the accounts. Among other things, we find that the image posting activity of Russian trolls is tightly coupled with real-world events, and that their targets, as well as the content shared, changed over time. When looking at the interplay between domains that shared the same images as state-sponsored trolls, we find clear-cut differences in the origin and/or spread of images across the Web. Overall, our findings provide new insight into how state-sponsored trolls operate, and specifically how they use imagery to achieve their goals.
Finally, @cite_18 use machine learning to detect Twitter users that are likely to share content that originates from Russian state-sponsored accounts.
{ "cite_N": [ "@cite_18" ], "mid": [ "2950434393" ], "abstract": [ "Social media, once hailed as a vehicle for democratization and the promotion of positive social change across the globe, are under attack for becoming a tool of political manipulation and spread of disinformation. A case in point is the alleged use of trolls by Russia to spread malicious content in Western elections. This paper examines the Russian interference campaign in the 2016 US presidential election on Twitter. Our aim is twofold: first, we test whether predicting users who spread trolls' content is feasible in order to gain insight on how to contain their influence in the future; second, we identify features that are most predictive of users who either intentionally or unintentionally play a vital role in spreading this malicious content. We collected a dataset with over 43 million elections-related posts shared on Twitter between September 16 and November 9, 2016, by about 5.7 million users. This dataset includes accounts associated with the Russian trolls identified by the US Congress. Proposed models are able to very accurately identify users who spread the trolls' content (average AUC score of 96 , using 10-fold validation). We show that political ideology, bot likelihood scores, and some activity-related account meta data are the most predictive features of whether a user spreads trolls' content or not." ] }
1901.06033
2910986402
The Variational Auto-Encoder (VAE) model is a popular method to learn at once a generative model and embeddings for data living in a high-dimensional space. In the real world, many datasets may be assumed to be hierarchically structured. Traditionally, VAE uses a Euclidean latent space, but tree-like structures cannot be efficiently embedded in such spaces as opposed to hyperbolic spaces with negative curvature. We therefore endow VAE with a Poincar 'e ball model of hyperbolic geometry and derive the necessary methods to work with two main Gaussian generalisations on that space. We empirically show better generalisation to unseen data than the Euclidean counterpart, and can qualitatively and quantitatively better recover hierarchical structures.
In the Bayesian nonparametrics (BNP) literature, explicitly modelling the hierarchical structure of data has been a long-standing trend. Embedding graphs in hyperbolic spaces has been empirically shown to yield a more compact representation compared to Euclidean space, especially in low dimensions. @cite_6 studied the trade-offs of tree embeddings in the Poincaré disc.
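For concreteness, the hyperbolic distance on the unit Poincaré ball, which is the quantity such embeddings aim to preserve, is d(x, y) = arcosh(1 + 2‖x−y‖² / ((1−‖x‖²)(1−‖y‖²))). The sketch below computes it with NumPy; it is a standard textbook formula, not code from the cited work, and the sample points are arbitrary.

```python
# Poincare-ball distance, the standard formula used when embedding trees in
# hyperbolic space; the sample points below are arbitrary illustrations.
import numpy as np

def poincare_distance(x: np.ndarray, y: np.ndarray) -> float:
    """Hyperbolic distance between two points strictly inside the unit ball."""
    sq_norm_x = np.dot(x, x)
    sq_norm_y = np.dot(y, y)
    sq_diff = np.dot(x - y, x - y)
    arg = 1.0 + 2.0 * sq_diff / ((1.0 - sq_norm_x) * (1.0 - sq_norm_y))
    return float(np.arccosh(arg))

# Points near the boundary are "far apart" even when their Euclidean distance is
# modest, which is what makes the ball well suited to tree-like, hierarchical data.
print(poincare_distance(np.array([0.0, 0.0]), np.array([0.5, 0.0])))  # ~1.10
print(poincare_distance(np.array([0.9, 0.0]), np.array([0.0, 0.9])))  # much larger
```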
{ "cite_N": [ "@cite_6" ], "mid": [ "2797520557" ], "abstract": [ "Hyperbolic embeddings offer excellent quality with few dimensions when embedding hierarchical data structures like synonym or type hierarchies. Given a tree, we give a combinatorial construction that embeds the tree in hyperbolic space with arbitrarily low distortion without using optimization. On WordNet, our combinatorial embedding obtains a mean-average-precision of 0.989 with only two dimensions, while 's recent construction obtains 0.87 using 200 dimensions. We provide upper and lower bounds that allow us to characterize the precision-dimensionality tradeoff inherent in any hyperbolic embedding. To embed general metric spaces, we propose a hyperbolic generalization of multidimensional scaling (h-MDS). We show how to perform exact recovery of hyperbolic points from distances, provide a perturbation analysis, and give a recovery result that allows us to reduce dimensionality. The h-MDS approach offers consistently low distortion even with few dimensions across several datasets. Finally, we extract lessons from the algorithms and theory above to design a PyTorch-based implementation that can handle incomplete information and is scalable." ] }
1901.06081
2911064732
Abstract This paper presents a novel iterative deep learning framework and applies it to document enhancement and binarization. Unlike the traditional methods that predict the binary label of each pixel on the input image, we train the neural network to learn the degradations in document images and produce uniform images of the degraded input images, which in turn allows the network to refine the output iteratively. Two different iterative methods have been studied in this paper: recurrent refinement (RR) that uses the same trained neural network in each iteration for document enhancement and stacked refinement (SR) that uses a stack of different neural networks for iterative output refinement. Given the learned nature of the uniform and enhanced image, the binarization map can be easily obtained through use of a global or local threshold. The experimental results on several public benchmark data sets show that our proposed method provides a new, clean version of the degraded image, one that is suitable for visualization and which shows promising results for binarization using Otsu’s global threshold, based on enhanced images learned iteratively by the neural network.
Binarization is a classical research problem in document analysis, and many document binarization methods have been proposed in the literature over the past two decades. It aims to convert each pixel in a document image into either text or background. The most popular and simple method is the Otsu method @cite_53 , a nonparametric and unsupervised automatic threshold selection approach for gray-scale image binarization. It selects a global threshold based on the gray-scale histogram without any a priori knowledge, so its computational complexity is linear. The Otsu method works very well on uniform and clean images but produces poor results on degraded document images with nonuniform background. In order to solve this problem, local adaptive threshold methods have been proposed, such as Sauvola @cite_39 , Niblack @cite_30 , Pai @cite_31 and AdOtsu @cite_40 @cite_29 . These methods compute a local threshold for each pixel based on local statistics, such as the mean and standard deviation of a local area around the pixel. It should be noted that binarization is not always the goal: methods such as Otsu's can also be used for strong contrast enhancement.
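To make the global-versus-local distinction concrete, the sketch below implements Otsu's global threshold together with the standard Niblack and Sauvola local thresholds from their textbook formulas, T = m + k·s and T = m·(1 + k·(s/R − 1)) respectively. It is a plain NumPy/SciPy illustration with commonly used default parameters and an assumed dark-text-on-light-background polarity, not the implementations evaluated in this paper.

```python
# Illustrative global (Otsu) and local (Niblack, Sauvola) thresholding.
# Window sizes, k, and R are commonly used defaults, assumed for illustration.
import numpy as np
from scipy.ndimage import uniform_filter

def otsu_threshold(img: np.ndarray) -> float:
    """Global Otsu threshold: maximize between-class variance over the histogram."""
    hist, edges = np.histogram(img.ravel(), bins=256)
    hist = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(hist)
    w1 = 1.0 - w0
    cum_mean = np.cumsum(hist * centers)
    mu0 = cum_mean / np.clip(w0, 1e-12, None)
    mu1 = (cum_mean[-1] - cum_mean) / np.clip(w1, 1e-12, None)
    between = w0 * w1 * (mu0 - mu1) ** 2
    return centers[np.argmax(between)]

def local_stats(img: np.ndarray, window: int = 25):
    """Per-pixel mean and standard deviation over a square window."""
    img = img.astype(float)
    mean = uniform_filter(img, size=window)
    sq_mean = uniform_filter(img ** 2, size=window)
    std = np.sqrt(np.maximum(sq_mean - mean ** 2, 0.0))
    return mean, std

def niblack(img, window=25, k=-0.2):
    mean, std = local_stats(img, window)
    return img > mean + k * std                    # True = background, False = ink

def sauvola(img, window=25, k=0.2, r=128.0):
    mean, std = local_stats(img, window)
    return img > mean * (1.0 + k * (std / r - 1.0))
```

A single Otsu threshold is applied to the whole image, whereas Niblack and Sauvola recompute the threshold at every pixel from its neighborhood statistics, which is why they tolerate nonuniform backgrounds better.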
{ "cite_N": [ "@cite_30", "@cite_53", "@cite_29", "@cite_39", "@cite_40", "@cite_31" ], "mid": [ "1600276254", "2133059825", "2053896220", "2128060444", "2072610689", "2067887288" ], "abstract": [ "A new and distinct spur type apple variety which originated as a limb mutation of the standard winter banana apple tree (non-patented) is provided. This new apple variety possesses a vigorous compact and only slightly spreading growth habit and can be distinguished from its parent and the Housden spur type winter banana apple variety (non-patented). More specifically, the new variety forms more fruiting spurs per unit length on two and three year old wood than the standard winter banana apple tree and less spurs per unit length than the Housden spur type winter banana apple tree. Additionally, the new variety has the ability to heavily bear fruit having a whitish-yellow skin color with a sometimes slight scarlet red blush upon maturity which is substantially identical to that of the standard winter banana apple tree and which has substantially less skin russeting than the Housden spur type winter banana apple tree.", "", "Adaptive binarization methods play a central role in document image processing. In this work, an adaptive and parameterless generalization of Otsu's method is presented. The adaptiveness is obtained by combining grid-based modeling and the estimated background map. The parameterless behavior is achieved by automatically estimating the document parameters, such as the average stroke width and the average line height. The proposed method is extended using a multiscale framework, and has been applied on various datasets, including the DIBCO'09 dataset, with promising results.", "A new method is presented for adaptive document image binarization, where the page is considered as a collection of subcomponents such as text, background and picture. The problems caused by noise, illumination and many source type-related degradations are addressed. Two new algorithms are applied to determine a local threshold for each pixel. The performance evaluation of the algorithm utilizes test images with ground-truth, evaluation metrics for binarization of textual and synthetic images, and a weight-based ranking procedure for the \"nal result presentation. The proposed algorithms were tested with images including di!erent types of document components and degradations. The results were compared with a number of known techniques in the literature. The benchmarking results show that the method adapts and performs well in each case qualitatively and quantitatively. ( 1999 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.", "In this work, a multi-scale binarization framework is introduced, which can be used along with any adaptive threshold-based binarization method. This framework is able to improve the binarization results and to restore weak connections and strokes, especially in the case of degraded historical documents. This is achieved thanks to localized nature of the framework on the spatial domain. The framework requires several binarizations on different scales, which is addressed by introduction of fast grid-based models. This enables us to explore high scales which are usually unreachable to the traditional approaches. In order to expand our set of adaptive methods, an adaptive modification of Otsu's method, called AdOtsu, is introduced. 
In addition, in order to restore document images suffering from bleed-through degradation, we combine the framework with recursive adaptive methods. The framework shows promising performance in subjective and objective evaluations performed on available datasets.", "Document image binarization involves converting gray level images into binary images, which is a feature that has significantly impacted many portable devices in recent years, including PDAs and mobile camera phones. Given the limited memory space and the computational power of portable devices, reducing the computational complexity of an embedded system is of priority concern. This work presents an efficient document image binarization algorithm with low computational complexity and high performance. Integrating the advantages of global and local methods allows the proposed algorithm to divide the document image into several regions. A threshold surface is then constructed based on the diversity and the intensity of each region to derive the binary image. Experimental results demonstrate the effectiveness of the proposed method in providing a promising binarization outcome and low computational cost." ] }
1901.06081
2911064732
Abstract This paper presents a novel iterative deep learning framework and applies it to document enhancement and binarization. Unlike the traditional methods that predict the binary label of each pixel on the input image, we train the neural network to learn the degradations in document images and produce uniform images of the degraded input images, which in turn allows the network to refine the output iteratively. Two different iterative methods have been studied in this paper: recurrent refinement (RR) that uses the same trained neural network in each iteration for document enhancement and stacked refinement (SR) that uses a stack of different neural networks for iterative output refinement. Given the learned nature of the uniform and enhanced image, the binarization map can be easily obtained through use of a global or local threshold. The experimental results on several public benchmark data sets show that our proposed method provides a new, clean version of the degraded image, one that is suitable for visualization and which shows promising results for binarization using Otsu’s global threshold, based on enhanced images learned iteratively by the neural network.
Other prior knowledge about text is also exploited for binarization, such as the edge pixels extracted by edge detectors. For example, the Canny edge detector is used to extract edge pixels in @cite_6 , and the closed image edges are then considered as seeds to find the text region. The transition pixel, a generalization of the edge pixel, is introduced in @cite_28 ; it is computed based on intensity differences in a small neighborhood, and the statistics of these pixels are used to compute the threshold. In @cite_44 , structural symmetric pixels around strokes are used to compute the local threshold. Howe @cite_24 proposes a promising method which can tune its parameters automatically, using a global energy function as a loss that incorporates edge discontinuities (the Canny detector is used).
{ "cite_N": [ "@cite_28", "@cite_44", "@cite_24", "@cite_6" ], "mid": [ "2013420608", "2759766068", "2091863778", "2412298086" ], "abstract": [ "This paper introduces a novel binarization method based on the concept of transition pixel, a generalization of edge pixels. Such pixels are characterized by extreme transition values computed using pixel-intensity differences in a small neighborhood. We show how to adjust the threshold of several binary threshold methods which compute gray-intensity thresholds, using the gray-intensity mean and variance of the pixels in the transition set. Our experiments show that the new approach yields segmentation performance superior to several with current state-of-the-art binarization algorithms.", "Abstract This paper presents an effective approach for the local threshold binarization of degraded document images. We utilize the structural symmetric pixels (SSPs) to calculate the local threshold in neighborhood and the voting result of multiple thresholds will determine whether one pixel belongs to the foreground or not. The SSPs are defined as the pixels around strokes whose gradient magnitudes are large enough and orientations are symmetric opposite. The compensated gradient map is used to extract the SSP so as to weaken the influence of document degradations. To extract SSP candidates with large magnitudes and distinguish the faint characters and bleed-through background, we propose an adaptive global threshold selection algorithm. To further extract pixels with opposite orientations, an iterative stroke width estimation algorithm is applied to ensure the proper size of neighborhood used in orientation judgement. At last, we present a multiple threshold vote based framework to deal with some inaccurate detections of SSP. The experimental results on seven public document image binarization datasets show that our method is accurate and robust compared with many traditional and state-of-the-art document binarization approaches based on multiple evaluation measures.", "Document analysis systems often begin with binarization as a first processing stage. Although numerous techniques for binarization have been proposed, the results produced can vary in quality and often prove sensitive to the settings of one or more control parameters. This paper examines a promising approach to binarization based upon simple principles, and shows that its success depends most significantly upon the values of two key parameters. It further describes an automatic technique for setting these parameters in a manner that tunes them to the individual image, yielding a final binarization algorithm that can cut total error by one-third with respect to the baseline version. The results of this method advance the state of the art on recent benchmarks.", "" ] }
1901.06081
2911064732
Abstract This paper presents a novel iterative deep learning framework and applies it to document enhancement and binarization. Unlike the traditional methods that predict the binary label of each pixel on the input image, we train the neural network to learn the degradations in document images and produce uniform images of the degraded input images, which in turn allows the network to refine the output iteratively. Two different iterative methods have been studied in this paper: recurrent refinement (RR) that uses the same trained neural network in each iteration for document enhancement and stacked refinement (SR) that uses a stack of different neural networks for iterative output refinement. Given the learned nature of the uniform and enhanced image, the binarization map can be easily obtained through use of a global or local threshold. The experimental results on several public benchmark data sets show that our proposed method provides a new, clean version of the degraded image, one that is suitable for visualization and which shows promising results for binarization using Otsu’s global threshold, based on enhanced images learned iteratively by the neural network.
Convolutional neural networks achieve good performance in various applications and have also been applied to document analysis. For example, the winner of the recent DIBCO event @cite_23 uses the U-Net convolutional network architecture for accurate pixel classification. In @cite_20 , a fully convolutional neural network is applied at multiple image scales. A deep encoder-decoder architecture is used for binarization in @cite_18 @cite_36 . A hierarchical deep supervised network is proposed in @cite_10 for document binarization, which achieves state-of-the-art performance on several benchmark data sets. In @cite_16 , the Grid Long Short-Term Memory (Grid LSTM) network is used for binarization; however, it achieves lower performance than Vo's method @cite_10 .
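As a structural illustration of the encoder-decoder idea mentioned above, a minimal PyTorch sketch could look as follows. It is not the architecture of any cited method or of this paper; depths, channel counts, and the sigmoid output head are assumptions chosen only to show how a grayscale patch is mapped to a per-pixel foreground probability.

```python
# Minimal encoder-decoder for per-pixel binarization; all hyperparameters are
# illustrative assumptions, not a cited architecture.
import torch
import torch.nn as nn

class TinyBinarizer(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        # x: (B, 1, H, W) grayscale patch; output: per-pixel foreground probability.
        return torch.sigmoid(self.decoder(self.encoder(x)))

patch = torch.rand(1, 1, 128, 128)
prob = TinyBinarizer()(patch)        # (1, 1, 128, 128)
binary = (prob > 0.5).float()        # thresholded binarization map
```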
{ "cite_N": [ "@cite_18", "@cite_36", "@cite_23", "@cite_16", "@cite_10", "@cite_20" ], "mid": [ "2787190313", "2731900166", "2785874433", "2810569655", "2751352153", "2964010994" ], "abstract": [ "Document image binarization is one of the critical initial steps for document analysis and understanding. Previous work mostly focused on exploiting hand-crafted features to build statistical models for distinguishing text from background. However, these approaches only achieved limited success because: (a) the effectiveness of hand-crafted features is limited by the researcher's domain knowledge and understanding on the documents, and (b) a universal model cannot always capture the complexity of different document degradations. In order to address these challenges, we propose a convolutional encoder-decoder model with deep learning for document image binarization in this paper. In the proposed method, mid-level document image representations are learnt by a stack of convolutional layers, which compose the encoder in this architecture. Then the binarization image is obtained by mapping low resolution representations to the original size through the decoder, which is composed by a series of transposed convolutional layers. We compare the proposed binarization method with other binarization algorithms both qualitatively and quantitatively on the public dataset. The experimental results show that the proposed method has comparable performance to the other hand-crafted binarization approaches and has more generalization capabilities with limited in-domain training data.", "Abstract Binarization plays a key role in the automatic information retrieval from document images. This process is usually performed in the first stages of document analysis systems, and serves as a basis for subsequent steps. Hence it has to be robust in order to allow the full analysis workflow to be successful. Several methods for document image binarization have been proposed so far, most of which are based on hand-crafted image processing strategies. Recently, Convolutional Neural Networks have shown an amazing performance in many disparate duties related to computer vision. In this paper we discuss the use of convolutional auto-encoders devoted to learning an end-to-end map from an input image to its selectional output, in which activations indicate the likelihood of pixels to be either foreground or background. Once trained, documents can therefore be binarized by parsing them through the model and applying a global threshold. This approach has proven to outperform existing binarization strategies in a number of document types.", "DIBCO 2017 is the international Competition on Document Image Binarization organized in conjunction with the ICDAR 2017 conference. The general objective of the contest is to identify current advances in document image binarization of machine-printed and handwritten document images using performance evaluation measures that are motivated by document image analysis and recognition requirements. This paper describes the competition details including the evaluation measures used as well as the performance of the 26 submitted methods along with a brief description of each method.", "In the context of document image analysis, image binarization is an important preprocessing step for other document analysis algorithms, but also relevant on its own by improving the readability of images of historical documents. 
While historical document image binarization is challenging due to common image degradations, such as bleedthrough, faded ink or stains, achieving good binarization performance in a timely manner is a worthwhile goal to facilitate efficient information extraction from historical documents. In this paper, we propose a recurrent neural network based algorithm using Grid Long Short-Term Memory cells for image binarization, as well as a pseudo F-Measure based weighted loss function. We evaluate the binarization and execution performance of our algorithm for different choices of footprint size, scale factor and loss function. Our experiments show a significant trade-off between binarization time and quality for different footprint sizes. However, we see no statistically significant difference when using different scale factors and only limited differences for different loss functions. Lastly, we compare the binarization performance of our approach with the best performing algorithm in the 2016 handwritten document image binarization contest and show that both algorithms perform equally well.", "Abstract The binarization of degraded document images is a challenging problem in terms of document analysis. Binarization is a classification process in which intra-image pixels are assigned to either of the two following classes: foreground text and background. Most of the algorithms are constructed on low-level features in an unsupervised manner, and the consequent disenabling of full utilization of input-domain knowledge considerably limits distinguishing of background noises from the foreground. In this paper, a novel supervised-binarization method is proposed, in which a hierarchical deep supervised network (DSN) architecture is learned for the prediction of the text pixels at different feature levels. With higher-level features, the network can differentiate text pixels from background noises, whereby severe degradations that occur in document images can be managed. Alternatively, foreground maps that are predicted at lower-level features present a higher visual quality at the boundary area. Compared with those of traditional algorithms, binary images generated by our architecture have cleaner background and better-preserved strokes. The proposed approach achieves state-of-the-art results over widely used DIBCO datasets, revealing the robustness of the presented method.", "Binarization of degraded historical manuscript images is an important pre-processing step for many document processing tasks. We formulate binarization as a pixel classification learning task and apply a novel Fully Convolutional Network (FCN) architecture that operates at multiple image scales, including full resolution. The FCN is trained to optimize a continuous version of the Pseudo F-measure metric and an ensemble of FCNs outperform the competition winners on 4 of 7 DIBCO competitions. This same binarization technique can also be applied to different domains such as Palm Leaf Manuscripts with good performance. We analyze the performance of the proposed model w.r.t. the architectural hyperparameters, size and diversity of training data, and the input features chosen." ] }
1901.06237
2909579777
Due to concerns about human error in crowdsourcing, it is standard practice to collect labels for the same data point from multiple internet workers. We here show that the resulting budget can be used more effectively with a flexible worker assignment strategy that asks fewer workers to analyze easy-to-label data and more workers to analyze data that requires extra scrutiny. Our main contribution is to show how the allocations of the number of workers to a task can be computed optimally based on task features alone, without using worker profiles. Our target tasks are delineating cells in microscopy images and analyzing the sentiment toward the 2016 U.S. presidential candidates in tweets. We first propose an algorithm that computes budget-optimized crowd worker allocation (BUOCA). We next train a machine learning system (BUOCA-ML) that predicts an optimal number of crowd workers needed to maximize the accuracy of the labeling. We show that the computed allocation can yield large savings in the crowdsourcing budget (up to 49 percent points) while maintaining labeling accuracy. Finally, we envisage a human-machine system for performing budget-optimized data analysis at a scale beyond the feasibility of crowdsourcing.
Related Crowdsourcing Methodologies. Balancing the demands that accuracy requirements and budget limits place on crowdsourcing experiments has been the focus of research in various communities, including machine learning, human computation, data management, and computer vision. The crowdsourcing mechanisms used in practice, e.g., collecting image labels to train computer vision systems, are typically agnostic to the difficulty of a task, assigning the same fixed number of crowd workers to each task. Notable exceptions are the recent works by @cite_10 , @cite_0 , and @cite_2 , who proposed flexible worker assignment schemes.
{ "cite_N": [ "@cite_0", "@cite_10", "@cite_2" ], "mid": [ "2072514260", "2963573141", "2610525153" ], "abstract": [ "An increasing number of studies in political communication focus on the “sentiment” or “tone” of news content, political speeches, or advertisements. This growing interest in measuring sentiment coincides with a dramatic increase in the volume of digitized information. Computer automation has a great deal of potential in this new media environment. The objective here is to outline and validate a new automated measurement instrument for sentiment analysis in political texts. Our instrument uses a dictionary-based approach consisting of a simple word count of the frequency of keywords in a text from a predefined dictionary. The design of the freely available Lexicoder Sentiment Dictionary (LSD) is discussed in detail here. The dictionary is tested against a body of human-coded news content, and the resulting codes are also compared to results from nine existing content-analytic dictionaries. Analyses suggest that the LSD produces results that are more systematically related to human coding than are results ...", "We propose the ambiguity problem for the foreground object segmentation task and motivate the importance of estimating and accounting for this ambiguity when designing vision systems. Specifically, we distinguish between images which lead multiple annotators to segment different foreground objects (ambiguous) versus minor inter-annotator differences of the same object. Taking images from eight widely used datasets, we crowdsource labeling the images as “ambiguous” or “not ambiguous” to segment in order to construct a new dataset we call STATIC. Using STATIC, we develop a system that automatically predicts which images are ambiguous. Experiments demonstrate the advantage of our prediction system over existing saliency-based methods on images from vision benchmarks and images taken by blind people who are trying to recognize objects in their environment. Finally, we introduce a crowdsourcing system to achieve cost savings for collecting the diversity of all valid “ground truth” foreground object segmentations by collecting extra segmentations only when ambiguity is expected. Experiments show our system eliminates up to 47 of human effort compared to existing crowdsourcing methods with no loss in capturing the diversity of ground truths.", "Visual question answering systems empower users to ask any question about any image and receive a valid answer. However, existing systems do not yet account for the fact that a visual question can lead to a single answer or multiple different answers. While a crowd often agrees, disagreements do arise for many reasons including that visual questions are ambiguous, subjective, or difficult. We propose a model, CrowdVerge, for automatically predicting from a visual question whether a crowd would agree on one answer. We then propose how to exploit these predictions in a novel application to efficiently collect all valid answers to visual questions. Specifically, we solicit fewer human responses when answer agreement is expected and more human responses otherwise. Experiments on 121,811 visual questions asked by sighted and blind people show that, compared to existing crowdsourcing systems, our system captures the same answer diversity with typically 14-23 less crowd involvement." ] }
1901.06237
2909579777
Due to concerns about human error in crowdsourcing, it is standard practice to collect labels for the same data point from multiple internet workers. We here show that the resulting budget can be used more effectively with a flexible worker assignment strategy that asks fewer workers to analyze easy-to-label data and more workers to analyze data that requires extra scrutiny. Our main contribution is to show how the allocations of the number of workers to a task can be computed optimally based on task features alone, without using worker profiles. Our target tasks are delineating cells in microscopy images and analyzing the sentiment toward the 2016 U.S. presidential candidates in tweets. We first propose an algorithm that computes budget-optimized crowd worker allocation (BUOCA). We next train a machine learning system (BUOCA-ML) that predicts an optimal number of crowd workers needed to maximize the accuracy of the labeling. We show that the computed allocation can yield large savings in the crowdsourcing budget (up to 49 percent points) while maintaining labeling accuracy. Finally, we envisage a human-machine system for performing budget-optimized data analysis at a scale beyond the feasibility of crowdsourcing.
Our work is different from previously proposed crowdsourcing methodologies with adaptive worker assignments because these assume that the same workers can be employed with "user profile tracking." The worker-task allocation scheme by @cite_7 relies on being able to "incrementally estimate [the workers' accuracy] based on their previous work." The algorithm by @cite_3 relies on a majority-voting-efficient fusion method "to estimate the answers to each of the tasks," which also requires user profile tracking. Our methodology does not include user profile tracking because, in our experiments using the Amazon Mechanical Turk Internet marketplace, we cannot request the same workers in an incremental scheme to estimate the accuracy of their work. Our work makes use of the optimality of majority voting (MV) under certain conditions (Theorem 2). @cite_8 also point out that to use MV, the probability of correct labeling of each worker should be higher than 0.5.
{ "cite_N": [ "@cite_3", "@cite_7", "@cite_8" ], "mid": [ "2149906572", "2096848877", "2129345386" ], "abstract": [ "In this paper we address the problem of budget allocation for redundantly crowdsourcing a set of classification tasks where a key challenge is to find a trade-off between the total cost and the accuracy of estimation. We propose CrowdBudget, an agent-based budget allocation algorithm, that efficiently divides a given budget among different tasks in order to achieve low estimation error. In particular, we prove that CrowdBudget can achieve at most max 0, K 2- O,(√B) estimation error with high probability, where K is the number of tasks and B is the budget size. This result significantly outperforms the current best theoretical guarantee from ,. In addition, we demonstrate that our algorithm outperforms existing methods by up to 40 in experiments based on real-world data from a prominent database of crowdsourced classification responses.", "Crowd-sourcing is a recent framework in which human intelligence tasks are outsourced to a crowd of unknown people (\"workers\") as an open call (e.g., on Amazon's Mechanical Turk). Crowd-sourcing has become immensely popular with hoards of employers (\"requesters\"), who use it to solve a wide variety of jobs, such as dictation transcription, content screening, etc. In order to achieve quality results, requesters often subdivide a large task into a chain of bite-sized subtasks that are combined into a complex, iterative workflow in which workers check and improve each other's results. This paper raises an exciting question for AI — could an autonomous agent control these workflows without human intervention, yielding better results than today's state of the art, a fixed control program? We describe a planner, TURKONTROL, that formulates workflow control as a decision-theoretic optimization problem, trading off the implicit quality of a solution artifact against the cost for workers to achieve it. We lay the mathematical framework to govern the various decisions at each point in a popular class of workflows. Based on our analysis we implement the workflow control algorithm and present experiments demonstrating that TURKONTROL obtains much higher utilities than popular fixed policies.", "Crowdsourcing has become a popular paradigm for labeling large datasets. However, it has given rise to the computational task of aggregating the crowdsourced labels provided by a collection of unreliable annotators. We approach this problem by transforming it into a standard inference problem in graphical models, and applying approximate variational methods, including belief propagation (BP) and mean field (MF). We show that our BP algorithm generalizes both majority voting and a recent algorithm by [1], while our MF method is closely related to a commonly used EM algorithm. In both cases, we find that the performance of the algorithms critically depends on the choice of a prior distribution on the workers' reliability; by choosing the prior properly, both BP and MF (and EM) perform surprisingly well on both simulated and real-world datasets, competitive with state-of-the-art algorithms based on more complicated modeling assumptions." ] }
1901.06237
2909579777
Due to concerns about human error in crowdsourcing, it is standard practice to collect labels for the same data point from multiple internet workers. We here show that the resulting budget can be used more effectively with a flexible worker assignment strategy that asks fewer workers to analyze easy-to-label data and more workers to analyze data that requires extra scrutiny. Our main contribution is to show how the allocations of the number of workers to a task can be computed optimally based on task features alone, without using worker profiles. Our target tasks are delineating cells in microscopy images and analyzing the sentiment toward the 2016 U.S. presidential candidates in tweets. We first propose an algorithm that computes budget-optimized crowd worker allocation (BUOCA). We next train a machine learning system (BUOCA-ML) that predicts an optimal number of crowd workers needed to maximize the accuracy of the labeling. We show that the computed allocation can yield large savings in the crowdsourcing budget (up to 49 percent points) while maintaining labeling accuracy. Finally, we envisage a human-machine system for performing budget-optimized data analysis at a scale beyond the feasibility of crowdsourcing.
Our work is distinct from prior work in that our system not only learns an optimal crowd worker allocation that is adapted to task difficulty, but also a mapping from data features to crowd worker allocations. In their award-winning paper, @cite_2 addressed a related data-focused problem -- how to solicit fewer human responses when answer agreement is expected and more responses otherwise, based on predicting from a visual question whether a crowd would agree on one answer. Their method computes the required budget after a classifier has been applied to rank the ambiguity in the data. Their system solicits at most five answers for the @math data points predicted to reflect the greatest likelihood for crowd disagreement and one answer for the remaining visual questions, where @math is the extra budget available. In our paradigm, the output of BUOCA determines the specific budget level when a sufficient amount of data has been labeled so that the training of BUOCA-ML is expected to be successful. BUOCA-ML is then trained with the labels obtained with this training budget (note that the training budget is different from the budget needed to apply BUOCA-ML in phase 2).
{ "cite_N": [ "@cite_2" ], "mid": [ "2610525153" ], "abstract": [ "Visual question answering systems empower users to ask any question about any image and receive a valid answer. However, existing systems do not yet account for the fact that a visual question can lead to a single answer or multiple different answers. While a crowd often agrees, disagreements do arise for many reasons including that visual questions are ambiguous, subjective, or difficult. We propose a model, CrowdVerge, for automatically predicting from a visual question whether a crowd would agree on one answer. We then propose how to exploit these predictions in a novel application to efficiently collect all valid answers to visual questions. Specifically, we solicit fewer human responses when answer agreement is expected and more human responses otherwise. Experiments on 121,811 visual questions asked by sighted and blind people show that, compared to existing crowdsourcing systems, our system captures the same answer diversity with typically 14-23 less crowd involvement." ] }
1901.06237
2909579777
Due to concerns about human error in crowdsourcing, it is standard practice to collect labels for the same data point from multiple internet workers. We here show that the resulting budget can be used more effectively with a flexible worker assignment strategy that asks fewer workers to analyze easy-to-label data and more workers to analyze data that requires extra scrutiny. Our main contribution is to show how the allocations of the number of workers to a task can be computed optimally based on task features alone, without using worker profiles. Our target tasks are delineating cells in microscopy images and analyzing the sentiment toward the 2016 U.S. presidential candidates in tweets. We first propose an algorithm that computes budget-optimized crowd worker allocation (BUOCA). We next train a machine learning system (BUOCA-ML) that predicts an optimal number of crowd workers needed to maximize the accuracy of the labeling. We show that the computed allocation can yield large savings in the crowdsourcing budget (up to 49 percent points) while maintaining labeling accuracy. Finally, we envisage a human-machine system for performing budget-optimized data analysis at a scale beyond the feasibility of crowdsourcing.
A flexible crowdsourcing scheme that collects additional labels for tweets estimated to be difficult to understand because they contain sarcasm has been proposed by @cite_0 . Their estimation is based on a Natural Language Processing (NLP) analysis, for example, whether the tweet includes texting lingo, such as lol, rofl, or OMG, or whether the tweeter highlights words by writing them in all capital letters. In our work, we also use NLP tools to analyze the labeling difficulty of tweets, including sarcasm. Different from the work by @cite_0 , which relies on handcrafted decision trees to compute the number of workers to allocate to a specific tweet, we propose a general, automatic scheme to allocate workers.
{ "cite_N": [ "@cite_0" ], "mid": [ "2072514260" ], "abstract": [ "An increasing number of studies in political communication focus on the “sentiment” or “tone” of news content, political speeches, or advertisements. This growing interest in measuring sentiment coincides with a dramatic increase in the volume of digitized information. Computer automation has a great deal of potential in this new media environment. The objective here is to outline and validate a new automated measurement instrument for sentiment analysis in political texts. Our instrument uses a dictionary-based approach consisting of a simple word count of the frequency of keywords in a text from a predefined dictionary. The design of the freely available Lexicoder Sentiment Dictionary (LSD) is discussed in detail here. The dictionary is tested against a body of human-coded news content, and the resulting codes are also compared to results from nine existing content-analytic dictionaries. Analyses suggest that the LSD produces results that are more systematically related to human coding than are results ..." ] }
1901.06237
2909579777
Due to concerns about human error in crowdsourcing, it is standard practice to collect labels for the same data point from multiple internet workers. We here show that the resulting budget can be used more effectively with a flexible worker assignment strategy that asks fewer workers to analyze easy-to-label data and more workers to analyze data that requires extra scrutiny. Our main contribution is to show how the allocations of the number of workers to a task can be computed optimally based on task features alone, without using worker profiles. Our target tasks are delineating cells in microscopy images and analyzing the sentiment toward the 2016 U.S. presidential candidates in tweets. We first propose an algorithm that computes budget-optimized crowd worker allocation (BUOCA). We next train a machine learning system (BUOCA-ML) that predicts an optimal number of crowd workers needed to maximize the accuracy of the labeling. We show that the computed allocation can yield large savings in the crowdsourcing budget (up to 49 percent points) while maintaining labeling accuracy. Finally, we envisage a human-machine system for performing budget-optimized data analysis at a scale beyond the feasibility of crowdsourcing.
Related Methods for Image Segmentation. Many solutions have been proposed for crowdsourcing the task of image segmentation. The most common solution requires task requesters to collect redundant data from multiple crowd workers and uses majority voting (e.g., taking the majority of the decisions of 5 workers per task @cite_5 ). In one study, as much as 32% of the data obtained from internet workers had to be discarded @cite_4 . Our study shows that intelligent allocation of crowd efforts can be used to achieve high-quality segmentation while satisfying budget constraints.
{ "cite_N": [ "@cite_5", "@cite_4" ], "mid": [ "1999045972", "2055302526" ], "abstract": [ "Analyses of biomedical images often rely on demarcating the boundaries of biological structures (segmentation). While numerous approaches are adopted to address the segmentation problem including collecting annotations from domain-experts and automated algorithms, the lack of comparative benchmarking makes it challenging to determine the current state-of-art, recognize limitations of existing approaches, and identify relevant future research directions. To provide practical guidance, we evaluated and compared the performance of trained experts, crowd sourced non-experts, and algorithms for annotating 305 objects coming from six datasets that include phase contrast, fluorescence, and magnetic resonance images. Compared to the gold standard established by expert consensus, we found the best annotators were experts, followed by non-experts, and then algorithms. This analysis revealed that online paid crowd sourced workers without domain-specific backgrounds are reliable annotators to use as part of the laboratory protocol for segmenting biomedical images. We also found that fusing the segmentations created by crowd sourced internet workers and algorithms yielded improved segmentation results over segmentations created by single crowd sourced or algorithm annotations respectively. We invite extensions of our work by sharing our data sets and associated segmentation annotations (http: www.cs.bu.edu betke Biomedical Image Segmentation).", "The appearance of surfaces in real-world scenes is determined by the materials, textures, and context in which the surfaces appear. However, the datasets we have for visualizing and modeling rich surface appearance in context, in applications such as home remodeling, are quite limited. To help address this need, we present OpenSurfaces, a rich, labeled database consisting of thousands of examples of surfaces segmented from consumer photographs of interiors, and annotated with material parameters (reflectance, material names), texture information (surface normals, rectified textures), and contextual information (scene category, and object names). Retrieving usable surface information from uncalibrated Internet photo collections is challenging. We use human annotations and present a new methodology for segmenting and annotating materials in Internet photo collections suitable for crowdsourcing (e.g., through Amazon's Mechanical Turk). Because of the noise and variability inherent in Internet photos and novice annotators, designing this annotation engine was a key challenge; we present a multi-stage set of annotation tasks with quality checks and validation. We demonstrate the use of this database in proof-of-concept applications including surface retexturing and material and image browsing, and discuss future uses. OpenSurfaces is a public resource available at http: opensurfaces.cs.cornell.edu ." ] }
1901.06199
2909896778
Generative Adversarial Networks (GAN) receive great attentions recently due to its excellent performance in image generation, transformation, and super-resolution. However, GAN has rarely been studied and trained for classification, leading that the generated images may not be appropriate for classification. In this paper, we propose a novel Generative Adversarial Classifier (GAC) particularly for low-resolution Handwriting Character Recognition. Specifically, involving additionally a classifier in the training process of normal GANs, GAC is calibrated for learning suitable structures and restored characters images that benefits the classification. Experimental results show that our proposed method can achieve remarkable performance in handwriting characters 8x super-resolution, approximately 10 and 20 higher than the present state-of-the-art methods respectively on benchmark data CASIA-HWDB1.1 and MNIST.
Research on image super-resolution can be divided into two categories: single image super-resolution (SISR) and multiple image super-resolution (MISR) @cite_14 . Our work falls into the first category. We therefore focus on SISR and do not further discuss approaches that reconstruct HR images from multiple images.
{ "cite_N": [ "@cite_14" ], "mid": [ "1811400895" ], "abstract": [ "Growing interest in super-resolution (SR) restoration of video sequences and the closed related problem of construction of SR still images from image sequences has led to the emergence of several competing methodologies. We review the state of the art of SR techniques using a taxonomy of existing techniques. We critique these methods and identified areas which promise performance improvements." ] }
1901.06199
2909896778
Generative Adversarial Networks (GAN) receive great attentions recently due to its excellent performance in image generation, transformation, and super-resolution. However, GAN has rarely been studied and trained for classification, leading that the generated images may not be appropriate for classification. In this paper, we propose a novel Generative Adversarial Classifier (GAC) particularly for low-resolution Handwriting Character Recognition. Specifically, involving additionally a classifier in the training process of normal GANs, GAC is calibrated for learning suitable structures and restored characters images that benefits the classification. Experimental results show that our proposed method can achieve remarkable performance in handwriting characters 8x super-resolution, approximately 10 and 20 higher than the present state-of-the-art methods respectively on benchmark data CASIA-HWDB1.1 and MNIST.
Recently, convolutional neural network (CNN) based SR algorithms have shown excellent performance. In @cite_1 , the authors encoded a sparse representation prior into a feed-forward network architecture based on the learned iterative shrinkage and thresholding algorithm (LISTA) @cite_8 . @cite_24 @cite_7 used bicubic interpolation to rescale the low-resolution image as the network input and trained a three-layer convolutional network end-to-end. The deeply-recursive convolutional network (DRCN) @cite_17 is a highly effective architecture that allows long-range pixel dependencies while keeping the number of model parameters small. @cite_10 and @cite_25 proposed a perceptual loss function to reconstruct visually more convincing HR images.
{ "cite_N": [ "@cite_7", "@cite_8", "@cite_1", "@cite_24", "@cite_10", "@cite_25", "@cite_17" ], "mid": [ "", "2118103795", "1919542679", "54257720", "2331128040", "2196707239", "2214802144" ], "abstract": [ "", "In Sparse Coding (SC), input vectors are reconstructed using a sparse linear combination of basis vectors. SC has become a popular method for extracting features from data. For a given input, SC minimizes a quadratic reconstruction error with an L1 penalty term on the code. The process is often too slow for applications such as real-time pattern recognition. We proposed two versions of a very fast algorithm that produces approximate estimates of the sparse code that can be used to compute good visual features, or to initialize exact iterative algorithms. The main idea is to train a non-linear, feed-forward predictor with a specific architecture and a fixed depth to produce the best possible approximation of the sparse code. A version of the method, which can be seen as a trainable version of Li and Osher's coordinate descent method, is shown to produce approximate solutions with 10 times less computation than Li and Os-her's for the same approximation error. Unlike previous proposals for sparse code predictors, the system allows a kind of approximate \"explaining away\" to take place during inference. The resulting predictor is differentiable and can be included into globally-trained recognition systems.", "Deep learning techniques have been successfully applied in many areas of computer vision, including low-level image restoration problems. For image super-resolution, several models based on deep neural networks have been recently proposed and attained superior performance that overshadows all previous handcrafted models. The question then arises whether large-capacity and data-driven models have become the dominant solution to the ill-posed super-resolution problem. In this paper, we argue that domain expertise represented by the conventional sparse coding model is still valuable, and it can be combined with the key ingredients of deep learning to achieve further improved results. We show that a sparse coding model particularly designed for super-resolution can be incarnated as a neural network, and trained in a cascaded structure from end to end. The interpretation of the network based on sparse coding leads to much more efficient and effective training, as well as a reduced model size. Our model is evaluated on a wide range of images, and shows clear advantage over existing state-of-the-art methods in terms of both restoration accuracy and human subjective quality.", "We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) [15] that takes the low-resolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage.", "We consider image transformation problems, where an input image is transformed into an output image. 
Recent methods for such problems typically train feed-forward convolutional neural networks using a per-pixel loss between the output and ground-truth images. Parallel work has shown that high-quality images can be generated by defining and optimizing perceptual loss functions based on high-level features extracted from pretrained networks. We combine the benefits of both approaches, and propose the use of perceptual loss functions for training feed-forward networks for image transformation tasks. We show results on image style transfer, where a feed-forward network is trained to solve the optimization problem proposed by in real-time. Compared to the optimization-based method, our network gives similar qualitative results but is three orders of magnitude faster. We also experiment with single-image super-resolution, where replacing a per-pixel loss with a perceptual loss gives visually pleasing results.", "Inverse problems in image and audio, and super-resolution in particular, can be seen as high-dimensional structured prediction problems, where the goal is to characterize the conditional distribution of a high-resolution output given its low-resolution corrupted observation. When the scaling ratio is small, point estimates achieve impressive performance, but soon they suffer from the regression-to-the-mean problem, result of their inability to capture the multi-modality of this conditional distribution. Modeling high-dimensional image and audio distributions is a hard task, requiring both the ability to model complex geometrical structures and textured regions. In this paper, we propose to use as conditional model a Gibbs distribution, where its sufficient statistics are given by deep convolutional neural networks. The features computed by the network are stable to local deformation, and have reduced variance when the input is a stationary texture. These properties imply that the resulting sufficient statistics minimize the uncertainty of the target signals given the degraded observations, while being highly informative. The filters of the CNN are initialized by multiscale complex wavelets, and then we propose an algorithm to fine-tune them by estimating the gradient of the conditional log-likelihood, which bears some similarities with Generative Adversarial Networks. We evaluate experimentally the proposed approach in the image super-resolution task, but the approach is general and could be used in other challenging ill-posed problems such as audio bandwidth extension.", "We propose an image super-resolution method (SR) using a deeply-recursive convolutional network (DRCN). Our network has a very deep recursive layer (up to 16 recursions). Increasing recursion depth can improve performance without introducing new parameters for additional convolutions. Albeit advantages, learning a DRCN is very hard with a standard gradient descent method due to exploding vanishing gradients. To ease the difficulty of training, we propose two extensions: recursive-supervision and skip-connection. Our method outperforms previous methods by a large margin." ] }
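For reference, a three-layer CNN of the kind summarized in the record above (a bicubic-interpolated image as input, refined by a few convolutional layers) can be sketched as follows. Layer widths, kernel sizes, and the scale factor are illustrative assumptions, not the published configurations:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ThreeLayerSR(nn.Module):
    """Rough SRCNN-style sketch: the low-resolution image is brought to the
    target size with bicubic interpolation and then refined by three
    convolutional layers. All sizes here are illustrative."""
    def __init__(self, channels=1):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, 64, kernel_size=9, padding=4)
        self.conv2 = nn.Conv2d(64, 32, kernel_size=5, padding=2)
        self.conv3 = nn.Conv2d(32, channels, kernel_size=5, padding=2)

    def forward(self, lr, scale=4):
        x = F.interpolate(lr, scale_factor=scale, mode="bicubic",
                          align_corners=False)
        x = F.relu(self.conv1(x))
        x = F.relu(self.conv2(x))
        return self.conv3(x)

lr = torch.rand(1, 1, 16, 16)
print(ThreeLayerSR()(lr).shape)   # torch.Size([1, 1, 64, 64])
```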
1901.06199
2909896778
Generative Adversarial Networks (GAN) receive great attentions recently due to its excellent performance in image generation, transformation, and super-resolution. However, GAN has rarely been studied and trained for classification, leading that the generated images may not be appropriate for classification. In this paper, we propose a novel Generative Adversarial Classifier (GAC) particularly for low-resolution Handwriting Character Recognition. Specifically, involving additionally a classifier in the training process of normal GANs, GAC is calibrated for learning suitable structures and restored characters images that benefits the classification. Experimental results show that our proposed method can achieve remarkable performance in handwriting characters 8x super-resolution, approximately 10 and 20 higher than the present state-of-the-art methods respectively on benchmark data CASIA-HWDB1.1 and MNIST.
Generative Adversarial Nets (GANs) were proposed by Goodfellow et al. @cite_6 and consist of two parts, a generator and a discriminator. The generator is responsible for producing images close to real images in order to fool the discriminator, while the discriminator is responsible for distinguishing generated images from real ones. The problem of adversarial examples has also been raised, and many methods have been proposed to address it, such as @cite_28 .
{ "cite_N": [ "@cite_28", "@cite_6" ], "mid": [ "2963341057", "2099471712" ], "abstract": [ "Adversarial examples are augmented data points generated by imperceptible perturbation of input samples. They have recently drawn much attention with the machine learning and data mining community. Being difficult to distinguish from real examples, such adversarial examples could change the prediction of many of the best learning models including the state-of-the-art deep learning models. Recent attempts have been made to build robust models that take into account adversarial examples. However, these methods can either lead to performance drops or lack mathematical motivations. In this paper, we propose a unified framework to build robust machine learning models against adversarial examples. More specifically, using the unified framework, we develop a family of gradient regularization methods that effectively penalize the gradient of loss function w.r.t. inputs. Our proposed framework is appealing in that it offers a unified view to deal with adversarial examples. It incorporates another recently-proposed perturbation based approach as a special case. In addition, we present some visual effects that reveals semantic meaning in those perturbations, and thus support our regularization method and provide another explanation for generalizability of adversarial examples. By applying this technique to Maxout networks, we conduct a series of experiments and achieve encouraging results on two benchmark datasets. In particular, we attain the best accuracy on MNIST data (without data augmentation) and competitive performance on CIFAR-10 data.", "We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples." ] }
1901.06199
2909896778
Generative Adversarial Networks (GAN) receive great attentions recently due to its excellent performance in image generation, transformation, and super-resolution. However, GAN has rarely been studied and trained for classification, leading that the generated images may not be appropriate for classification. In this paper, we propose a novel Generative Adversarial Classifier (GAC) particularly for low-resolution Handwriting Character Recognition. Specifically, involving additionally a classifier in the training process of normal GANs, GAC is calibrated for learning suitable structures and restored characters images that benefits the classification. Experimental results show that our proposed method can achieve remarkable performance in handwriting characters 8x super-resolution, approximately 10 and 20 higher than the present state-of-the-art methods respectively on benchmark data CASIA-HWDB1.1 and MNIST.
In 2016, @cite_21 proposed DCGAN, which is stable in most settings and exhibits vector arithmetic as an intrinsic property of the representations learned by the generator. @cite_23 proposed the conditional GAN; the idea is to use labels for some data to help the network build salient representations, and the generator's outputs can be controlled without changing the architecture simply by adding the label as another input to the generator. @cite_27 proposed SRGAN, which reconstructs the HR image with a GAN built on ResNet @cite_19 ; it achieves remarkable perceptual quality but low PSNR. Triple-GAN was proposed by @cite_0 and contains three parts: a classifier @math that (approximately) characterizes the conditional distribution @math , a class-conditional generator @math that (approximately) characterizes the conditional distribution in the other direction @math , and a discriminator @math that determines whether a pair of data @math comes from the true distribution @math . The final goal of Triple-GAN is to predict the labels @math for unlabeled data as well as to generate new samples @math conditioned on @math .
{ "cite_N": [ "@cite_21", "@cite_0", "@cite_19", "@cite_27", "@cite_23" ], "mid": [ "2173520492", "2964218010", "", "2523714292", "2125389028" ], "abstract": [ "In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations.", "Generative Adversarial Nets (GANs) have shown promise in image generation and semi-supervised learning (SSL). However, existing GANs in SSL have two problems: (1) the generator and the discriminator (i.e. the classifier) may not be optimal at the same time; and (2) the generator cannot control the semantics of the generated samples. The problems essentially arise from the two-player formulation, where a single discriminator shares incompatible roles of identifying fake samples and predicting labels and it only estimates the data without considering the labels. To address the problems, we present triple generative adversarial net (Triple-GAN), which consists of three players---a generator, a discriminator and a classifier. The generator and the classifier characterize the conditional distributions between images and labels, and the discriminator solely focuses on identifying fake image-label pairs. We design compatible utilities to ensure that the distributions characterized by the classifier and the generator both converge to the data distribution. Our results on various datasets demonstrate that Triple-GAN as a unified model can simultaneously (1) achieve the state-of-the-art classification results among deep generative models, and (2) disentangle the classes and styles of the input and transfer smoothly in the data space via interpolation in the latent space class-conditionally.", "", "Despite the breakthroughs in accuracy and speed of single image super-resolution using faster and deeper convolutional neural networks, one central problem remains largely unsolved: how do we recover the finer texture details when we super-resolve at large upscaling factors? The behavior of optimization-based super-resolution methods is principally driven by the choice of the objective function. Recent work has largely focused on minimizing the mean squared reconstruction error. The resulting estimates have high peak signal-to-noise ratios, but they are often lacking high-frequency details and are perceptually unsatisfying in the sense that they fail to match the fidelity expected at the higher resolution. In this paper, we present SRGAN, a generative adversarial network (GAN) for image super-resolution (SR). To our knowledge, it is the first framework capable of inferring photo-realistic natural images for 4x upscaling factors. To achieve this, we propose a perceptual loss function which consists of an adversarial loss and a content loss. 
The adversarial loss pushes our solution to the natural image manifold using a discriminator network that is trained to differentiate between the super-resolved images and original photo-realistic images. In addition, we use a content loss motivated by perceptual similarity instead of similarity in pixel space. Our deep residual network is able to recover photo-realistic textures from heavily downsampled images on public benchmarks. An extensive mean-opinion-score (MOS) test shows hugely significant gains in perceptual quality using SRGAN. The MOS scores obtained with SRGAN are closer to those of the original high-resolution images than to those obtained with any state-of-the-art method.", "Generative Adversarial Nets [8] were recently introduced as a novel way to train generative models. In this work we introduce the conditional version of generative adversarial nets, which can be constructed by simply feeding the data, y, we wish to condition on to both the generator and discriminator. We show that this model can generate MNIST digits conditioned on class labels. We also illustrate how this model could be used to learn a multi-modal model, and provide preliminary examples of an application to image tagging in which we demonstrate how this approach can generate descriptive tags which are not part of training labels." ] }
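The conditional-GAN idea summarized in the record above (control the generator by feeding the class label as an extra input, leaving the rest of the architecture unchanged) can be sketched as below. Sizes and layer choices are illustrative assumptions; the Triple-GAN classifier and discriminator are not reproduced here:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Conditional generator sketch: the class label is one-hot encoded and
# concatenated with the noise vector, so only the input layer gets wider.
NUM_CLASSES, NOISE_DIM, OUT_DIM = 10, 32, 28 * 28

cond_generator = nn.Sequential(
    nn.Linear(NOISE_DIM + NUM_CLASSES, 128),
    nn.ReLU(),
    nn.Linear(128, OUT_DIM),
    nn.Tanh(),
)

def generate(labels):
    z = torch.randn(labels.size(0), NOISE_DIM)
    y = F.one_hot(labels, NUM_CLASSES).float()
    return cond_generator(torch.cat([z, y], dim=1))

samples = generate(torch.tensor([3, 7]))   # one sample of class 3, one of class 7
print(samples.shape)                       # torch.Size([2, 784])
```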
1907.07469
2960922699
In this work, we propose an edge detection algorithm by estimating a lifetime of an event produced from dynamic vision sensor (DVS), also known as event camera. The event camera, unlike traditional CMOS camera, generates sparse event data at a pixel whose log-intensity changes. Due to this characteristic, theoretically, there is only one or no event at the specific time, which makes it difficult to grasp the world captured by the camera at a particular moment. In this work, we present an algorithm that keeps the event alive until the corresponding event is generated in a nearby pixel so that the shape of an edge is preserved. Particularly, we consider a pixel area to fit a plane on Surface of Active Events (SAE) and call the point inside the pixel area closest to the plane as a intra-pixel-area event. These intra-pixel-area events help the fitting plane algorithm to estimate life time robustly and precisely. Our algorithm performs better in terms of sharpness and similarity metric than the accumulation of events over fixed counts or time intervals, when compared with the existing edge detection algorithms, both qualitatively and quantitatively.
Some research aims to detect edges, not just the line segments that are frequently found in artifacts. F. Barranco et al. @cite_0 detect the contours of foreground objects. They extract features from the accumulated events, such as orientation, timestamp, motion, and time texture. The boundary is then predicted by a learned Structured Random Forest (SRF) given the DVS features. However, since this algorithm is developed for object segmentation, it prioritizes detecting the boundary of the foreground, and overall edge-extraction performance may be degraded. E. Mueggler et al. @cite_8 estimate the lifetime of an event from local plane fitting on the SAE, based on event-based visual flow @cite_4 . However, the naïve RANdom SAmple Consensus (RANSAC) method could not be successfully adapted to the event camera, causing imprecise estimation. Therefore, we propose an intra-pixel-area approach for RANSAC in order to estimate a local plane robustly and precisely. We also quantitatively evaluate algorithms in terms of a similarity metric, which has not been done in most of the previous works.
{ "cite_N": [ "@cite_0", "@cite_4", "@cite_8" ], "mid": [ "2216124221", "1975697167", "1539361405" ], "abstract": [ "The bio-inspired, asynchronous event-based dynamic vision sensor records temporal changes in the luminance of the scene at high temporal resolution. Since events are only triggered at significant luminance changes, most events occur at the boundary of objects and their parts. The detection of these contours is an essential step for further interpretation of the scene. This paper presents an approach to learn the location of contours and their border ownership using Structured Random Forests on event-based features that encode motion, timing, texture, and spatial orientations. The classifier integrates elegantly information over time by utilizing the classification results previously computed. Finally, the contour detection and boundary assignment are demonstrated in a layer-segmentation of the scene. Experimental results demonstrate good performance in boundary detection and segmentation.", "This paper introduces a new methodology to compute dense visual flow using the precise timings of spikes from an asynchronous event-based retina. Biological retinas, and their artificial counterparts, are totally asynchronous and data-driven and rely on a paradigm of light acquisition radically different from most of the currently used frame-grabber technologies. This paper introduces a framework to estimate visual flow from the local properties of events' spatiotemporal space. We will show that precise visual flow orientation and amplitude can be estimated using a local differential approach on the surface defined by coactive events. Experimental results are presented; they show the method adequacy with high data sparseness and temporal resolution of event-based acquisition that allows the computation of motion flow with microsecond accuracy and at very low computational cost.", "We propose an algorithm to estimate the “lifetime” of events from retinal cameras, such as a Dynamic Vision Sensor (DVS). Unlike standard CMOS cameras, a DVS only transmits pixel-level brightness changes (“events”) at the time they occur with micro-second resolution. Due to its low latency and sparse output, this sensor is very promising for high-speed mobile robotic applications. We develop an algorithm that augments each event with its lifetime, which is computed from the event's velocity on the image plane. The generated stream of augmented events gives a continuous representation of events in time, hence enabling the design of new algorithms that outperform those based on the accumulation of events over fixed, artificially-chosen time intervals. A direct application of this augmented stream is the construction of sharp gradient (edge-like) images at any time instant. We successfully demonstrate our method in different scenarios, including high-speed quadrotor flips, and compare it to standard visualization methods." ] }
1907.07581
2958911020
Online personalized news product needs a suitable cover for the article. The news cover demands to be with high image quality, and draw readers' attention at same time, which is extraordinary challenging due to the subjectivity of the task. In this paper, we assess the news cover from image clarity and object salience perspective. We propose an end-to-end multi-task learning network for image clarity assessment and semantic segmentation simultaneously, the results of which can be guided for news cover assessment. The proposed network is based on a modified DeepLabv3+ model. The network backbone is used for multiple scale spatial features exaction, followed by two branches for image clarity assessment and semantic segmentation, respectively. The experiment results show that the proposed model is able to capture important content in images and performs better than single-task learning baselines on our proposed game content based CIA dataset.
The human visual system is highly sensitive to the edge and contour information of an image @cite_17 . Some IQA studies take edge structure information as the main image-quality cue; for example, in @cite_13 the authors apply edge information for both blur and noise detection, which are the major factors in image-quality degradation. In @cite_19 , an edge model is employed to extract salient edge information for screen content image assessment, outperforming the other state-of-the-art IQA models of the day.
{ "cite_N": [ "@cite_19", "@cite_13", "@cite_17" ], "mid": [ "2508724573", "", "2032586563" ], "abstract": [ "Since the human visual system (HVS) is highly sensitive to edges, a novel image quality assessment (IQA) metric for assessing screen content images (SCIs) is proposed in this paper. The turnkey novelty lies in the use of an existing parametric edge model to extract two types of salient attributes — namely, edge contrast and edge width, for the distorted SCI under assessment and its original SCI, respectively. The extracted information is subject to conduct similarity measurements on each attribute, independently. The obtained similarity scores are then combined using our proposed edge-width pooling strategy to generate the final IQA score. Hopefully, this score is consistent with the judgment made by the HVS. Experimental results have shown that the proposed IQA metric produces higher consistency with that of the HVS on the evaluation of the image quality of the distorted SCI than that of other state-of-the-art IQA metrics.", "", "Objective image video quality metrics which accurately represent the subjective quality of processed images are of paramount importance for the design and assessment of an image compression and transmission system. In some scenarios, it is also important to evaluate the quality of the received image with minimal reference to the transmitted one. For instance, for closed-loop optimization of a transmission system, the image quality measure can be evaluated at the receiver and provided as feedback information to the system controller. The original image - prior to compression and transmission - is not usually available at the receiver side, and it is important to rely at the receiver side on an objective quality metric that does not need reference or needs minimal reference to the original image. The observation that the human eye is very sensitive to edge and contour information of an image underpins the proposal of our reduced reference (RR) quality metric, which compares edge information between the distorted and the original image. Results highlight that the metric correlates well with subjective observations, also in comparison with commonly used full-reference metrics and with a state-of-the-art reduced reference metric." ] }
1907.07581
2958911020
Online personalized news product needs a suitable cover for the article. The news cover demands to be with high image quality, and draw readers' attention at same time, which is extraordinary challenging due to the subjectivity of the task. In this paper, we assess the news cover from image clarity and object salience perspective. We propose an end-to-end multi-task learning network for image clarity assessment and semantic segmentation simultaneously, the results of which can be guided for news cover assessment. The proposed network is based on a modified DeepLabv3+ model. The network backbone is used for multiple scale spatial features exaction, followed by two branches for image clarity assessment and semantic segmentation, respectively. The experiment results show that the proposed model is able to capture important content in images and performs better than single-task learning baselines on our proposed game content based CIA dataset.
In recent years, the idea of employing a CNN-based approach for no-reference IQA (NR-IQA) tasks has gained traction, and the performance of NR-IQA has improved significantly with such methods @cite_24 @cite_14 . For example, in @cite_14 , a CNN is directly utilized for image quality prediction without a reference image, integrating feature learning and regression into one optimization process. One common limitation of those models is that their network architectures are shallow and narrow, and thus not deep enough to learn high-level features. The emergence of deeper CNNs, such as ResNet-101 @cite_7 and Xception @cite_8 , further improves the representational ability of such models. For example, DeepLabv3+ @cite_22 employs atrous convolution to extract dense feature maps and capture global multi-scale context, resulting in significant performance improvements on semantic segmentation tasks. In @cite_20 , a DeepLab-based network is applied to extract spatial features of hyperspectral images and achieves outstanding performance.
{ "cite_N": [ "@cite_14", "@cite_22", "@cite_7", "@cite_8", "@cite_24", "@cite_20" ], "mid": [ "2051596736", "2964309882", "2194775991", "2531409750", "2894561647", "2754213847" ], "abstract": [ "In this work we describe a Convolutional Neural Network (CNN) to accurately predict image quality without a reference image. Taking image patches as input, the CNN works in the spatial domain without using hand-crafted features that are employed by most previous methods. The network consists of one convolutional layer with max and min pooling, two fully connected layers and an output node. Within the network structure, feature learning and regression are integrated into one optimization process, which leads to a more effective model for estimating image quality. This approach achieves state of the art performance on the LIVE dataset and shows excellent generalization ability in cross dataset experiments. Further experiments on images with local distortions demonstrate the local quality estimation ability of our CNN, which is rarely reported in previous literature.", "Spatial pyramid pooling module or encode-decoder structure are used in deep neural networks for semantic segmentation task. The former networks are able to encode multi-scale contextual information by probing the incoming features with filters or pooling operations at multiple rates and multiple effective fields-of-view, while the latter networks can capture sharper object boundaries by gradually recovering the spatial information. In this work, we propose to combine the advantages from both methods. Specifically, our proposed model, DeepLabv3+, extends DeepLabv3 by adding a simple yet effective decoder module to refine the segmentation results especially along object boundaries. We further explore the Xception model and apply the depthwise separable convolution to both Atrous Spatial Pyramid Pooling and decoder modules, resulting in a faster and stronger encoder-decoder network. We demonstrate the effectiveness of the proposed model on PASCAL VOC 2012 and Cityscapes datasets, achieving the test set performance of 89 and 82.1 without any post-processing. Our paper is accompanied with a publicly available reference implementation of the proposed models in Tensorflow at https: github.com tensorflow models tree master research deeplab.", "Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. 
Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.", "We present an interpretation of Inception modules in convolutional neural networks as being an intermediate step in-between regular convolution and the depthwise separable convolution operation (a depthwise convolution followed by a pointwise convolution). In this light, a depthwise separable convolution can be understood as an Inception module with a maximally large number of towers. This observation leads us to propose a novel deep convolutional neural network architecture inspired by Inception, where Inception modules have been replaced with depthwise separable convolutions. We show that this architecture, dubbed Xception, slightly outperforms Inception V3 on the ImageNet dataset (which Inception V3 was designed for), and significantly outperforms Inception V3 on a larger image classification dataset comprising 350 million images and 17,000 classes. Since the Xception architecture has the same number of parameters as Inception V3, the performance gains are not due to increased capacity but rather to a more efficient use of model parameters.", "Recently, deep learning has been used for hyperspectral image classification (HSIC) due to its powerful feature learning and classification ability. In this letter, a novel deep learning-based framework based on DeepLab is proposed for HSIC. Inspired by the excellent performance of DeepLab in semantic segmentation, the proposed framework applies DeepLab to excavate spatial features of the hyperspectral image (HSI) pixel to pixel. It breaks through the limitation of patch-wise feature learning in the most of existing deep learning methods used in HSIC. More importantly, it can extract features at multiple scales and effectively avoid the reduction of spatial resolution. Furthermore, to improve the HSIC performance, the spatial features extracted by DeepLab and the spectral features are fused by a weighted fusion method, then the fused features are input into support vector machine for final classification. Experimental results on two public HSI data sets demonstrate that the proposed framework outperformed the traditional methods and the existing deep learning-based methods, especially for small-scale classes.", "Automatically learned quality assessment for images has recently become a hot topic due to its usefulness in a wide variety of applications, such as evaluating image capture pipelines, storage techniques, and sharing media. Despite the subjective nature of this problem, most existing methods only predict the mean opinion score provided by data sets, such as AVA and TID2013. Our approach differs from others in that we predict the distribution of human opinion scores using a convolutional neural network. Our architecture also has the advantage of being significantly simpler than other methods with comparable performance. Our proposed approach relies on the success (and retraining) of proven, state-of-the-art deep object recognition networks. Our resulting network can be used to not only score images reliably and with high correlation to human perception, but also to assist with adaptation and optimization of photo editing enhancement algorithms in a photographic pipeline. 
All this is done without need for a “golden” reference image, consequently allowing for single-image, semantic- and perceptually-aware, no-reference quality assessment." ] }
1907.07581
2958911020
Online personalized news products need a suitable cover for each article. The news cover needs to have high image quality and to draw readers' attention at the same time, which is extraordinarily challenging due to the subjectivity of the task. In this paper, we assess the news cover from the image clarity and object salience perspectives. We propose an end-to-end multi-task learning network that performs image clarity assessment and semantic segmentation simultaneously, and whose results can guide news cover assessment. The proposed network is based on a modified DeepLabv3+ model. The network backbone is used for multi-scale spatial feature extraction, followed by two branches for image clarity assessment and semantic segmentation, respectively. The experimental results show that the proposed model is able to capture important content in images and performs better than single-task learning baselines on our proposed game-content-based CIA dataset.
MTL is based on the fundamental idea that different tasks can share a common low-level representation. In many computer vision tasks, MTL has shown advantages in both performance and memory efficiency. In @cite_6 , a unified architecture that jointly learns low-, mid-, and high-level vision tasks is introduced. With such a universal network, the tasks of boundary detection, normal estimation, saliency estimation, semantic segmentation, semantic boundary detection, proposal generation, and object detection can be addressed simultaneously. In @cite_21 , a multi-task learning network with "cross-stitch" units is proposed, which shows dramatically improved performance over single-task baselines on the NYUv2 dataset @cite_3 . However, prior studies have not explored a multi-task learning architecture for IQA and semantic segmentation, which is the approach we pursue in this work.
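To make the cross-stitch idea concrete, the following is a minimal, illustrative PyTorch sketch of a cross-stitch unit that linearly mixes the activations of two tasks; the 2x2 mixing matrix, its near-identity initialisation and the toy feature-map shapes are assumptions made for the example, not the configuration used in the cited work.

```python
import torch
import torch.nn as nn

class CrossStitchUnit(nn.Module):
    """Learned linear mixing of two tasks' activations (cross-stitch idea)."""
    def __init__(self):
        super().__init__()
        # near-identity initialisation keeps the tasks mostly independent at first
        self.alpha = nn.Parameter(torch.tensor([[0.9, 0.1],
                                                [0.1, 0.9]]))

    def forward(self, x_a, x_b):
        # x_a, x_b: same-shaped activations from task A and task B
        mixed_a = self.alpha[0, 0] * x_a + self.alpha[0, 1] * x_b
        mixed_b = self.alpha[1, 0] * x_a + self.alpha[1, 1] * x_b
        return mixed_a, mixed_b

# toy usage: mix two 8-channel feature maps of a shared backbone
unit = CrossStitchUnit()
a, b = torch.randn(1, 8, 16, 16), torch.randn(1, 8, 16, 16)
a_mixed, b_mixed = unit(a, b)
```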
{ "cite_N": [ "@cite_21", "@cite_3", "@cite_6" ], "mid": [ "2963877604", "125693051", "2963498646" ], "abstract": [ "Multi-task learning in Convolutional Networks has displayed remarkable success in the field of recognition. This success can be largely attributed to learning shared representations from multiple supervisory tasks. However, existing multi-task approaches rely on enumerating multiple network architectures specific to the tasks at hand, that do not generalize. In this paper, we propose a principled approach to learn shared representations in ConvNets using multitask learning. Specifically, we propose a new sharing unit: \"cross-stitch\" unit. These units combine the activations from multiple networks and can be trained end-to-end. A network with cross-stitch units can learn an optimal combination of shared and task-specific representations. Our proposed method generalizes across multiple tasks and shows dramatically improved performance over baseline methods for categories with few training examples.", "We present an approach to interpret the major surfaces, objects, and support relations of an indoor scene from an RGBD image. Most existing work ignores physical interactions or is applied only to tidy rooms and hallways. Our goal is to parse typical, often messy, indoor scenes into floor, walls, supporting surfaces, and object regions, and to recover support relationships. One of our main interests is to better understand how 3D cues can best inform a structured 3D interpretation. We also contribute a novel integer programming formulation to infer physical support relations. We offer a new dataset of 1449 RGBD images, capturing 464 diverse indoor scenes, with detailed annotations. Our experiments demonstrate our ability to infer support relations in complex scenes and verify that our 3D scene cues and inferred support lead to better object segmentation.", "In this work we train in an end-to-end manner a convolutional neural network (CNN) that jointly handles low-, mid-, and high-level vision tasks in a unified architecture. Such a network can act like a swiss knife for vision tasks, we call it an UberNet to indicate its overarching nature. The main contribution of this work consists in handling challenges that emerge when scaling up to many tasks. We introduce techniques that facilitate (i) training a deep architecture while relying on diverse training sets and (ii) training many (potentially unlimited) tasks with a limited memory budget. This allows us to train in an end-to-end manner a unified CNN architecture that jointly handles (a) boundary detection (b) normal estimation (c) saliency estimation (d) semantic segmentation (e) human part segmentation (f) semantic boundary detection, (g) region proposal generation and object detection. We obtain competitive performance while jointly addressing all tasks in 0.7 seconds on a GPU. Our system will be made publicly available." ] }
1907.07671
2958881886
Stress research is a rapidly emerging area in the field of electroencephalography (EEG) based signal processing. The use of EEG as an objective measure for cost-effective and personalized stress management becomes important in particular situations, such as the non-availability of mental health facilities. In this study, long-term stress is classified using baseline EEG signal recordings. The labelling for the stress and control groups is performed using two methods: (i) the perceived stress scale score and (ii) expert evaluation. The frequency-domain features are extracted from five-channel EEG recordings, in addition to the frontal and temporal alpha and beta asymmetries. The alpha asymmetry is computed from four channels and used as a feature. Feature selection is also performed using a t-test to identify statistically significant features for both stress and control groups. We found that a support vector machine is best suited to classify long-term human stress when used with alpha asymmetry as a feature. It is observed that the expert evaluation based labelling method improves the classification accuracy up to 85.20. Based on these results, it is concluded that alpha asymmetry may be used as a potential bio-marker for stress classification when labels are assigned using expert evaluation.
Hemispheric specialization is a major concern in neuro-physiological research. Generally, a healthy brain at rest has a fairly balanced level of activity in both hemispheres of the brain @cite_30 . The left hemisphere is associated with the processing of positive emotions, while the right hemisphere is associated with the processing of negative emotions @cite_16 . The extent of asymmetry has been suggested to vary under conditions of chronic stress @cite_12 . Frontal asymmetry is highly related to post-traumatic stress disorder (PTSD) @cite_31 . The results in @cite_32 show that the major depression disorder (MDD) group is significantly right-lateralized relative to controls, and that both the MDD and PTSD groups displayed more left- than right-frontal activity.
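As an illustration of how an alpha-asymmetry feature of this kind can be computed, here is a rough Python sketch using Welch band-power estimates; the sampling rate, the 8-13 Hz band, the channel pairing and the closing SVM hint are illustrative assumptions rather than the exact pipeline of the cited study.

```python
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, fmin=8.0, fmax=13.0):
    """Average power of `signal` in the [fmin, fmax] Hz band via Welch's method."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (freqs >= fmin) & (freqs <= fmax)
    return psd[mask].mean()

def alpha_asymmetry(left_ch, right_ch, fs):
    """log(right alpha power) - log(left alpha power), a common asymmetry index."""
    return np.log(band_power(right_ch, fs)) - np.log(band_power(left_ch, fs))

# toy usage with synthetic one-minute recordings from a left/right electrode pair
fs = 128
left, right = np.random.randn(fs * 60), np.random.randn(fs * 60)
feature = alpha_asymmetry(left, right, fs)
# features collected over many subjects would then feed a classifier,
# e.g. sklearn.svm.SVC(kernel="rbf").fit(X, y)
```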
{ "cite_N": [ "@cite_30", "@cite_32", "@cite_31", "@cite_16", "@cite_12" ], "mid": [ "64229777", "1997785567", "1495009241", "2133085034", "2040325003" ], "abstract": [ "", "Abstract Spontaneously occurring brief periods of lower voltage irregular activity occurring amid a background of alpha activity (i.e., alpha blocking) in eyes-closed resting occipital EEG recordings from 32 healthy human subjects have been investigated to determine the extent of changes of mean frequency and of spectral purity (degree of regularity irregularity of the EEG activity) during such periods. New methods for determining mean frequency and spectral purity (the latter as a new measure, the Spectral Purity Index, which has a maximum value of 1.0 for a pure sine wave) permit their conjoint evaluation over a 0.5 sec window that is advanced along the EEG in 0.1 sec steps, thus permitting almost continuous feature extraction. The findings indicate that, although spectral purity invariably decreased during the periods of lower voltage irregular activity, the mean frequency remaiend relatively unaltered, i.e., it remained unchanged or it increased or decreased slightly but at most by 2.5 Hz. These results suggest that, at least for the periods of lower voltage irregular activity occurring spontaneously amid an alpha background during eyes-closed occipital EEG recordings, it may be inaccurate (as some authors have already suggested) to use the term ‘low-voltage fast (or beta) activity.’", "Abstract Background Considering the Research Domain Criteria (RDoC) framework, it is crucial to investigate posttraumatic stress disorder (PTSD) as a spectrum that ranges from normal to pathological. This dimensional approach is especially important to aid early PTSD detection and to guide better treatment options. In recent years, electroencephalography (EEG) has been used to investigate PTSD; however, reviews regarding EEG data related to PTSD are lacking, especially considering the dimensional approach. This systematic review examined the literature regarding EEG alterations in trauma-exposed people with posttraumatic stress symptoms (PTSS) to identify putative EEG biomarkers of PTSS severity. Method A systematic review of EEG studies of trauma-exposed participants with PTSS that reported dimensional analyses (e.g., correlations or regressions) between PTSS and EEG measures was performed. Results The literature search yielded 1178 references, of which 34 studies were eligible for inclusion. Despite variability among the reviewed studies, the PTSS severity was often associated with P2, P3-family event-related potentials (ERPs) and alpha rhythms. Limitations The search was limited to articles published in English; no information about non-published studies or studies reported in other languages was obtained. Another limitation was the heterogeneity of studies, which made meta-analysis challenging. Conclusions EEG provides promising candidates to act as biomarkers, although further studies are required to confirm the findings. Thus, EEG, in addition to being cheaper and easier to implement than other central techniques, has the potential to reveal biomarkers of PTSS severity.", "Abstract This commentary provides reflections on the current state of affairs in research on EEG frontal asymmetries associated with affect. 
Although considerable progress has occurred since the first report on this topic 25 years ago, research on frontal EEG asymmetries associated with affect has largely evolved in the absence of any serious connection with neuroscience research on the structure and function of the primate prefrontal cortex (PFC). Such integration is important as this work progresses since the neuroscience literature can help to understand what the prefrontal cortex is “doing” in affective processing. Data from the neuroscience literature on the heterogeneity of different sectors of the PFC are introduced and more specific hypotheses are offered about what different sectors of the PFC might be doing in affect. A number of methodological issues associated with EEG measures of functional prefrontal asymmetries are also considered.", "To test the hypothesis that activation asymmetries of the most anterior parts of the prefrontal cortex may be related to state-dependent regulation of emotion, spontaneous changes of cortical activation asymmetries from one session to a second one were related to spontaneous mood changes in two large samples (ns = 56 and 128). The interval between sessions was 2 to 4 weeks. Results show that mood changes specifically covary with changes of EEG asymmetry at the frontopolar electrode positions, but not with changes at other locations (dorsolateral frontal, temporal, and parietal). Anxiety, tension, and depression were found to decrease when frontopolar activation asymmetry shifted to the right. Taking the new findings into account may contribute to the refinement and extension of theories on EEG laterality and emotion." ] }
1907.07543
2960456850
Despite the recent success of deep transfer learning approaches in NLP, there is a lack of quantitative studies demonstrating the gains these models offer in low-shot text classification tasks over existing paradigms. Deep transfer learning approaches such as BERT and ULMFiT demonstrate that they can beat state-of-the-art results on larger datasets; however, when one has only 100-1000 labelled examples per class, the choice of approach is less clear, with classical machine learning and deep transfer learning representing valid options. This paper compares the current best transfer learning approach with top classical machine learning approaches on a trinary sentiment classification task to assess the best paradigm. We find that BERT, representing the best of deep transfer learning, is the best performing approach, outperforming top classical machine learning algorithms by 9.7 on average when trained with 100 examples per class, narrowing to 1.8 at 1000 labels per class. We also show the robustness of deep transfer learning in moving across domains, where the maximum loss in accuracy is only 0.7 in similar domain tasks and 3.2 cross domain, compared to classical machine learning, which loses up to 20.6.
It is well established that there is no single classical machine learning classifier that consistently achieves the best classification performance. For example, across the works in @cite_17 @cite_4 @cite_3 , various classical machine learning approaches each slightly outperformed the others. It is a long-known phenomenon that these models have different strengths depending on the specific task and dataset. As such, we have considered two of these (Naïve Bayes and SVM) to give a fair representation and alleviate the bias of a single classifier.
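For concreteness, the kind of classical baselines referred to above could be set up roughly as follows with scikit-learn; the TF-IDF settings, the toy data and the choice of LinearSVC are assumptions for illustration, not the exact configurations compared in the cited works.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC

# two classical baselines of the kind compared against deep transfer learning
nb_clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), MultinomialNB())
svm_clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())

texts = ["great product", "terrible service", "it was okay"]   # toy data
labels = ["pos", "neg", "neu"]
nb_clf.fit(texts, labels)
svm_clf.fit(texts, labels)
print(nb_clf.predict(["really great"]))
```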
{ "cite_N": [ "@cite_4", "@cite_3", "@cite_17" ], "mid": [ "1823790170", "", "2595653137" ], "abstract": [ "Cyberbullying is becoming a major concern in online environments with troubling consequences. However, most of the technical studies have focused on the detection of cyberbullying through identifying harassing comments rather than preventing the incidents by detecting the bullies. In this work we study the automatic detection of bully users on YouTube. We compare three types of automatic detection: an expert system, supervised machine learning models, and a hybrid type combining the two. All these systems assign a score indicating the level of “bulliness” of online bullies. We demonstrate that the expert system outperforms the machine learning models. The hybrid classifier shows an even better performance.", "", "A key challenge for automatic hate-speech detection on social media is the separation of hate speech from other instances of offensive language. Lexical detection methods tend to have low precision because they classify all messages containing particular terms as hate speech and previous work using supervised learning has failed to distinguish between the two categories. We used a crowd-sourced hate speech lexicon to collect tweets containing hate speech keywords. We use crowd-sourcing to label a sample of these tweets into three categories: those containing hate speech, only offensive language, and those with neither. We train a multi-class classifier to distinguish between these different categories. Close analysis of the predictions and the errors shows when we can reliably separate hate speech from other offensive language and when this differentiation is more difficult. We find that racist and homophobic tweets are more likely to be classified as hate speech but that sexist tweets are generally classified as offensive. Tweets without explicit hate keywords are also more difficult to classify." ] }
1907.07613
2960281739
Template-matching methods for visual tracking have gained popularity recently due to their good performance and fast speed. However, they lack effective ways to adapt to changes in the target object's appearance, making their tracking accuracy still far from state-of-the-art. In this paper, we propose a dynamic memory network to adapt the template to the target's appearance variations during tracking. The reading and writing process of the external memory is controlled by an LSTM network with the search feature map as input. A spatial attention mechanism is applied to concentrate the LSTM input on the potential target as the location of the target is at first unknown. To prevent aggressive model adaptivity, we apply gated residual template learning to control the amount of retrieved memory that is used to combine with the initial template. In order to alleviate the drift problem, we also design a "negative" memory unit that stores templates for distractors, which are used to cancel out wrong responses from the object template. To further boost the tracking performance, an auxiliary classification loss is added after the feature extractor part. Unlike tracking-by-detection methods where the object's information is maintained by the weight parameters of neural networks, which requires expensive online fine-tuning to be adaptable, our tracker runs completely feed-forward and adapts to the target's appearance changes by updating the external memory. Moreover, the capacity of our model is not determined by the network size as with other trackers --- the capacity can be easily enlarged as the memory requirements of a task increase, which is favorable for memorizing long-term object information. Extensive experiments on the OTB and VOT datasets demonstrate that our trackers perform favorably against state-of-the-art tracking methods while retaining real-time speed.
In this section, we review related work on tracking-by-detection, tracking by template-matching, memory networks and multi-task learning. A preliminary version of our work appears in ECCV 2018 @cite_63 . This paper contains additional improvements in both methodology and experiments, including: 1) we propose a negative memory unit that stores distractor templates to cancel out wrong responses from the object template; 2) we design an auxiliary classification loss to facilitate the tracker's robustness to appearance changes; 3) we conduct comprehensive experiments on the VOT datasets, including VOT-2015, VOT-2016 and VOT-2017.
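The negative-memory idea in point (1) can be illustrated with a short PyTorch sketch: responses to a distractor template are subtracted from responses to the object template. The feature-map shapes, the cross-correlation via `conv2d` and the 0.5 weighting are assumptions made purely for illustration.

```python
import torch
import torch.nn.functional as F

def response_map(search_feat, template_feat):
    """Cross-correlate a template with a search-region feature map."""
    return F.conv2d(search_feat, template_feat)

# toy shapes for the feature maps (illustrative only)
search = torch.randn(1, 32, 22, 22)
pos_template = torch.randn(1, 32, 6, 6)   # object template retrieved from memory
neg_template = torch.randn(1, 32, 6, 6)   # distractor template from the negative memory

# wrong responses caused by distractors are cancelled out by subtraction
final = response_map(search, pos_template) - 0.5 * response_map(search, neg_template)
```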
{ "cite_N": [ "@cite_63" ], "mid": [ "2963471260" ], "abstract": [ "Template-matching methods for visual tracking have gained popularity recently due to their comparable performance and fast speed. However, they lack effective ways to adapt to changes in the target object’s appearance, making their tracking accuracy still far from state-of-the-art. In this paper, we propose a dynamic memory network to adapt the template to the target’s appearance variations during tracking. An LSTM is used as a memory controller, where the input is the search feature map and the outputs are the control signals for the reading and writing process of the memory block. As the location of the target is at first unknown in the search feature map, an attention mechanism is applied to concentrate the LSTM input on the potential target. To prevent aggressive model adaptivity, we apply gated residual template learning to control the amount of retrieved memory that is used to combine with the initial template. Unlike tracking-by-detection methods where the object’s information is maintained by the weight parameters of neural networks, which requires expensive online fine-tuning to be adaptable, our tracker runs completely feed-forward and adapts to the target’s appearance changes by updating the external memory. Moreover, unlike other tracking methods where the model capacity is fixed after offline training – the capacity of our tracker can be easily enlarged as the memory requirements of a task increase, which is favorable for memorizing long-term object information. Extensive experiments on OTB and VOT demonstrates that our tracker MemTrack performs favorably against state-of-the-art tracking methods while retaining real-time speed of 50 fps." ] }
1907.07613
2960281739
Template-matching methods for visual tracking have gained popularity recently due to their good performance and fast speed. However, they lack effective ways to adapt to changes in the target object's appearance, making their tracking accuracy still far from state-of-the-art. In this paper, we propose a dynamic memory network to adapt the template to the target's appearance variations during tracking. The reading and writing process of the external memory is controlled by an LSTM network with the search feature map as input. A spatial attention mechanism is applied to concentrate the LSTM input on the potential target as the location of the target is at first unknown. To prevent aggressive model adaptivity, we apply gated residual template learning to control the amount of retrieved memory that is used to combine with the initial template. In order to alleviate the drift problem, we also design a "negative" memory unit that stores templates for distractors, which are used to cancel out wrong responses from the object template. To further boost the tracking performance, an auxiliary classification loss is added after the feature extractor part. Unlike tracking-by-detection methods where the object's information is maintained by the weight parameters of neural networks, which requires expensive online fine-tuning to be adaptable, our tracker runs completely feed-forward and adapts to the target's appearance changes by updating the external memory. Moreover, the capacity of our model is not determined by the network size as with other trackers --- the capacity can be easily enlarged as the memory requirements of a task increase, which is favorable for memorizing long-term object information. Extensive experiments on the OTB and VOT datasets demonstrate that our trackers perform favorably against state-of-the-art tracking methods while retaining real-time speed.
Tracking-by-detection treats object tracking as a detection problem within an ROI image, where an online learned classifier is used to distinguish the target from the background. The difficulty of updating the classifier to adapt to appearance variations is that the bounding box predicted on each frame may not be accurate, which produces degraded training samples and thus gradually causes the tracker to drift. Numerous algorithms have been designed to mitigate the sample ambiguity caused by inaccurately predicted bounding boxes. @cite_50 formulates the online model learning process in a semi-supervised fashion by combining a given prior and the trained classifier. @cite_23 proposes a multiple instance learning scheme to solve the problem of inaccurate examples for online training. Instead of only focusing on facilitating the training process of the tracker, @cite_80 decomposes the tracking task into three parts---tracking, learning and detection, where an optical flow tracker is used for frame-to-frame tracking and an online trained detector is adopted to re-detect the target when drifting occurs.
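Schematically, the tracking-by-detection loop described above can be written as follows; `clf`, `sample_around` and `feats` are placeholder components (an online classifier, a candidate sampler and a feature extractor), so this is a generic sketch rather than any particular tracker.

```python
import numpy as np

def track_by_detection(frames, init_box, clf, sample_around, feats):
    """Generic tracking-by-detection loop with placeholder components."""
    box = init_box
    for frame in frames:
        candidates = sample_around(frame, box)            # candidate boxes near the last estimate
        scores = clf.decision_function(feats(frame, candidates))
        box = candidates[int(np.argmax(scores))]          # detection step: best-scoring candidate
        # update step: the prediction itself supplies the new training sample; if the box
        # is slightly off, the label is noisy -- this is the drift problem discussed above
        clf.partial_fit(feats(frame, [box]), np.array([1]), classes=np.array([0, 1]))
    return box
```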
{ "cite_N": [ "@cite_80", "@cite_23", "@cite_50" ], "mid": [ "2124211486", "2109579504", "1807914171" ], "abstract": [ "This paper investigates long-term tracking of unknown objects in a video stream. The object is defined by its location and extent in a single frame. In every frame that follows, the task is to determine the object's location and extent or indicate that the object is not present. We propose a novel tracking framework (TLD) that explicitly decomposes the long-term tracking task into tracking, learning, and detection. The tracker follows the object from frame to frame. The detector localizes all appearances that have been observed so far and corrects the tracker if necessary. The learning estimates the detector's errors and updates it to avoid these errors in the future. We study how to identify the detector's errors and learn from them. We develop a novel learning method (P-N learning) which estimates the errors by a pair of “experts”: (1) P-expert estimates missed detections, and (2) N-expert estimates false alarms. The learning process is modeled as a discrete dynamical system and the conditions under which the learning guarantees improvement are found. We describe our real-time implementation of the TLD framework and the P-N learning. We carry out an extensive quantitative evaluation which shows a significant improvement over state-of-the-art approaches.", "In this paper, we address the problem of tracking an object in a video given its location in the first frame and no other information. Recently, a class of tracking techniques called “tracking by detection” has been shown to give promising results at real-time speeds. These methods train a discriminative classifier in an online manner to separate the object from the background. This classifier bootstraps itself by using the current tracker state to extract positive and negative examples from the current frame. Slight inaccuracies in the tracker can therefore lead to incorrectly labeled training examples, which degrade the classifier and can cause drift. In this paper, we show that using Multiple Instance Learning (MIL) instead of traditional supervised learning avoids these problems and can therefore lead to a more robust tracker with fewer parameter tweaks. We propose a novel online MIL algorithm for object tracking that achieves superior results with real-time performance. We present thorough experimental results (both qualitative and quantitative) on a number of challenging video clips.", "Recently, on-line adaptation of binary classifiers for tracking have been investigated. On-line learning allows for simple classifiers since only the current view of the object from its surrounding background needs to be discriminiated. However, on-line adaption faces one key problem: Each update of the tracker may introduce an error which, finally, can lead to tracking failure (drifting). The contribution of this paper is a novel on-line semi-supervised boosting method which significantly alleviates the drifting problem in tracking applications. This allows to limit the drifting problem while still staying adaptive to appearance changes. The main idea is to formulate the update process in a semi-supervised fashion as combined decision of a given prior and an on-line classifier. This comes without any parameter tuning. In the experiments, we demonstrate real-time tracking of our SemiBoost tracker on several challenging test sequences where our tracker outperforms other on-line tracking methods." ] }
1907.07613
2960281739
Template-matching methods for visual tracking have gained popularity recently due to their good performance and fast speed. However, they lack effective ways to adapt to changes in the target object's appearance, making their tracking accuracy still far from state-of-the-art. In this paper, we propose a dynamic memory network to adapt the template to the target's appearance variations during tracking. The reading and writing process of the external memory is controlled by an LSTM network with the search feature map as input. A spatial attention mechanism is applied to concentrate the LSTM input on the potential target as the location of the target is at first unknown. To prevent aggressive model adaptivity, we apply gated residual template learning to control the amount of retrieved memory that is used to combine with the initial template. In order to alleviate the drift problem, we also design a "negative" memory unit that stores templates for distractors, which are used to cancel out wrong responses from the object template. To further boost the tracking performance, an auxiliary classification loss is added after the feature extractor part. Unlike tracking-by-detection methods where the object's information is maintained by the weight parameters of neural networks, which requires expensive online fine-tuning to be adaptable, our tracker runs completely feed-forward and adapts to the target's appearance changes by updating the external memory. Moreover, the capacity of our model is not determined by the network size as with other trackers --- the capacity can be easily enlarged as the memory requirements of a task increase, which is favorable for memorizing long-term object information. Extensive experiments on the OTB and VOT datasets demonstrate that our trackers perform favorably against state-of-the-art tracking methods while retaining real-time speed.
With the widespread use of CNNs in the computer vision community, many methods @cite_1 have applied CNNs as the classifier to localize the target. @cite_49 uses two fully convolutional neural networks to estimate the target's bounding box, including a GNet that captures category information and an SNet that classifies the target from the background. @cite_29 presents a multi-domain learning framework to learn the shared representation of objects from different sequences. Motivated by Dropout @cite_12 , BranchOut @cite_40 adopts multiple branches of fully connected layers, from which a random subset is selected for training, which regularizes the neural networks to avoid overfitting. Unlike these tracking-by-detection algorithms, which need costly stochastic gradient descent (SGD) updating, our method runs completely feed-forward and adapts to the object's appearance variations through a memory writing process, thus achieving real-time performance.
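The random-branch-selection idea attributed to BranchOut above can be sketched in a few lines of PyTorch; the number of branches, their sizes and the averaging at test time are assumptions chosen only to illustrate the mechanism.

```python
import random
import torch
import torch.nn as nn

class RandomBranchHead(nn.Module):
    """Several fully connected branches; a random subset is active during training,
    and all branches are averaged at test time."""
    def __init__(self, in_dim=512, num_branches=5):
        super().__init__()
        self.branches = nn.ModuleList([nn.Linear(in_dim, 2) for _ in range(num_branches)])

    def forward(self, x):
        if self.training:
            k = random.randint(1, len(self.branches))
            active = random.sample(list(self.branches), k)   # random subset regularizes training
        else:
            active = list(self.branches)
        return torch.stack([branch(x) for branch in active]).mean(dim=0)

head = RandomBranchHead()
scores = head(torch.randn(4, 512))   # 4 candidate features -> target/background scores
```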
{ "cite_N": [ "@cite_29", "@cite_1", "@cite_40", "@cite_49", "@cite_12" ], "mid": [ "1857884451", "2767302379", "2737572441", "2211629196", "2095705004" ], "abstract": [ "We propose a novel visual tracking algorithm based on the representations from a discriminatively trained Convolutional Neural Network (CNN). Our algorithm pretrains a CNN using a large set of videos with tracking groundtruths to obtain a generic target representation. Our network is composed of shared layers and multiple branches of domain-specific layers, where domains correspond to individual training sequences and each branch is responsible for binary classification to identify target in each domain. We train each domain in the network iteratively to obtain generic target representations in the shared layers. When tracking a target in a new sequence, we construct a new network by combining the shared layers in the pretrained CNN with a new binary classification layer, which is updated online. Online tracking is performed by evaluating the candidate windows randomly sampled around the previous target state. The proposed algorithm illustrates outstanding performance in existing tracking benchmarks.", "Abstract Recently, deep learning has achieved great success in visual tracking. The goal of this paper is to review the state-of-the-art tracking methods based on deep learning. First, we introduce the background of deep visual tracking, including the fundamental concepts of visual tracking and related deep learning algorithms. Second, we categorize the existing deep-learning-based trackers into three classes according to network structure, network function and network training. For each categorize, we explain its analysis of the network perspective and analyze papers in different categories. Then, we conduct extensive experiments to compare the representative methods on the popular OTB-100, TC-128 and VOT2015 benchmarks. Based on our observations, we conclude that: (1) The usage of the convolutional neural network (CNN) model could significantly improve the tracking performance. (2) The trackers using the convolutional neural network (CNN) model to distinguish the tracked object from its surrounding background could get more accurate results, while using the CNN model for template matching is usually faster. (3) The trackers with deep features perform much better than those with low-level hand-crafted features. (4) Deep features from different convolutional layers have different characteristics and the effective combination of them usually results in a more robust tracker. (5) The deep visual trackers using end-to-end networks usually perform better than the trackers merely using feature extraction networks. (6) For visual tracking, the most suitable network training method is to per-train networks with video information and online fine-tune them with subsequent observations. Finally, we summarize our manuscript and highlight our insights, and point out the further trends for deep visual tracking.", "We propose an extremely simple but effective regularization technique of convolutional neural networks (CNNs), referred to as BranchOut, for online ensemble tracking. Our algorithm employs a CNN for target representation, which has a common convolutional layers but has multiple branches of fully connected layers. For better regularization, a subset of branches in the CNN are selected randomly for online learning whenever target appearance models need to be updated. 
Each branch may have a different number of layers to maintain variable abstraction levels of target appearances. BranchOut with multi-level target representation allows us to learn robust target appearance models with diversity and handle various challenges in visual tracking problem effectively. The proposed algorithm is evaluated in standard tracking benchmarks and shows the state-of-the-art performance even without additional pretraining on external tracking sequences.", "We propose a new approach for general object tracking with fully convolutional neural network. Instead of treating convolutional neural network (CNN) as a black-box feature extractor, we conduct in-depth study on the properties of CNN features offline pre-trained on massive image data and classification task on ImageNet. The discoveries motivate the design of our tracking system. It is found that convolutional layers in different levels characterize the target from different perspectives. A top layer encodes more semantic features and serves as a category detector, while a lower layer carries more discriminative information and can better separate the target from distracters with similar appearance. Both layers are jointly used with a switch mechanism during tracking. It is also found that for a tracking target, only a subset of neurons are relevant. A feature map selection method is developed to remove noisy and irrelevant feature maps, which can reduce computation redundancy and improve tracking accuracy. Extensive evaluation on the widely used tracking benchmark [36] shows that the proposed tacker outperforms the state-of-the-art significantly.", "Deep neural nets with a large number of parameters are very powerful machine learning systems. However, overfitting is a serious problem in such networks. Large networks are also slow to use, making it difficult to deal with overfitting by combining the predictions of many different large neural nets at test time. Dropout is a technique for addressing this problem. The key idea is to randomly drop units (along with their connections) from the neural network during training. This prevents units from co-adapting too much. During training, dropout samples from an exponential number of different \"thinned\" networks. At test time, it is easy to approximate the effect of averaging the predictions of all these thinned networks by simply using a single unthinned network that has smaller weights. This significantly reduces overfitting and gives major improvements over other regularization methods. We show that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets." ] }
1907.07613
2960281739
Template-matching methods for visual tracking have gained popularity recently due to their good performance and fast speed. However, they lack effective ways to adapt to changes in the target object's appearance, making their tracking accuracy still far from state-of-the-art. In this paper, we propose a dynamic memory network to adapt the template to the target's appearance variations during tracking. The reading and writing process of the external memory is controlled by an LSTM network with the search feature map as input. A spatial attention mechanism is applied to concentrate the LSTM input on the potential target as the location of the target is at first unknown. To prevent aggressive model adaptivity, we apply gated residual template learning to control the amount of retrieved memory that is used to combine with the initial template. In order to alleviate the drift problem, we also design a "negative" memory unit that stores templates for distractors, which are used to cancel out wrong responses from the object template. To further boost the tracking performance, an auxiliary classification loss is added after the feature extractor part. Unlike tracking-by-detection methods where the object's information is maintained by the weight parameters of neural networks, which requires expensive online fine-tuning to be adaptable, our tracker runs completely feed-forward and adapts to the target's appearance changes by updating the external memory. Moreover, the capacity of our model is not determined by the network size as with other trackers --- the capacity can be easily enlarged as the memory requirements of a task increase, which is favorable for memorizing long-term object information. Extensive experiments on the OTB and VOT datasets demonstrate that our trackers perform favorably against state-of-the-art tracking methods while retaining real-time speed.
Matching-based methods have recently gained popularity due to their fast speed and promising performance. The most notable is the fully convolutional Siamese network (SiamFC) @cite_58 . Although it only uses the first frame as the template, SiamFC achieves competitive results and fast speed. The key deficiency of SiamFC is that it lacks an effective model for online updating. To address this, @cite_30 updates the model using linear interpolation of new templates with a small learning rate, but only sees modest improvements in accuracy. RFL (Recurrent Filter Learning) @cite_18 adopts a convolutional LSTM for model updating, where the forget and input gates automatically control the linear combination of the historical target information (i.e., the memory states of the LSTM) and the object's current template. Guo et al. @cite_10 propose a dynamic Siamese network with two general transformations for target appearance variation and background suppression. He et al. @cite_64 design two branches of Siamese networks with a channel-wise attention mechanism, aiming to improve the robustness and discrimination ability of the matching network.
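The simple linear-interpolation template update mentioned above amounts to a running average of template features; the sketch below shows the idea in PyTorch, with the feature-map shape and the learning rate of 0.01 being illustrative assumptions.

```python
import torch

def update_template(template, new_template, lr=0.01):
    """Running linear interpolation of the matching template with a small learning rate."""
    return (1.0 - lr) * template + lr * new_template

template = torch.randn(1, 32, 6, 6)        # illustrative template feature-map shape
for _ in range(5):                         # pretend one new template is extracted per frame
    new_t = torch.randn(1, 32, 6, 6)
    template = update_template(template, new_t)
```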
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_64", "@cite_58", "@cite_10" ], "mid": [ "2962824803", "2963873961", "2963854930", "2470394683", "2776035257" ], "abstract": [ "The Correlation Filter is an algorithm that trains a linear template to discriminate between images and their translations. It is well suited to object tracking because its formulation in the Fourier domain provides a fast solution, enabling the detector to be re-trained once per frame. Previous works that use the Correlation Filter, however, have adopted features that were either manually designed or trained for a different task. This work is the first to overcome this limitation by interpreting the Correlation Filter learner, which has a closed-form solution, as a differentiable layer in a deep neural network. This enables learning deep features that are tightly coupled to the Correlation Filter. Experiments illustrate that our method has the important practical benefit of allowing lightweight architectures to achieve state-of-the-art performance at high framerates.", "Recently using convolutional neural networks (CNNs) has gained popularity in visual tracking, due to its robust feature representation of images. Recent methods perform online tracking by fine-tuning a pre-trained CNN model to the specific target object using stochastic gradient descent (SGD) back-propagation, which is usually time-consuming. In this paper, we propose a recurrent filter generation methods for visual tracking. We directly feed the target's image patch to a recurrent neural network (RNN) to estimate an object-specific filter for tracking. As the video sequence is a spatiotemporal data, we extend the matrix multiplications of the fully-connected layers of the RNN to a convolution operation on feature maps, which preserves the target's spatial structure and also is memory-efficient. The tracked object in the subsequent frames will be fed into the RNN to adapt the generated filters to appearance variations of the target. Note that once the off-line training process of our network is finished, there is no need to fine-tune the network for specific objects, which makes our approach more efficient than methods that use iterative fine-tuning to online learn the target. Extensive experiments conducted on widely used benchmarks, OTB and VOT, demonstrate encouraging results compared to other recent methods.", "Observing that Semantic features learned in an image classification task and Appearance features learned in a similarity matching task complement each other, we build a twofold Siamese network, named SA-Siam, for real-time object tracking. SA-Siam is composed of a semantic branch and an appearance branch. Each branch is a similaritylearning Siamese network. An important design choice in SA-Siam is to separately train the two branches to keep the heterogeneity of the two types of features. In addition, we propose a channel attention mechanism for the semantic branch. Channel-wise weights are computed according to the channel activations around the target position. While the inherited architecture from SiamFC [3] allows our tracker to operate beyond real-time, the twofold design and the attention mechanism significantly improve the tracking performance. 
The proposed SA-Siam outperforms all other real-time trackers by a large margin on OTB-2013 50 100 benchmarks.", "The problem of arbitrary object tracking has traditionally been tackled by learning a model of the object’s appearance exclusively online, using as sole training data the video itself. Despite the success of these methods, their online-only approach inherently limits the richness of the model they can learn. Recently, several attempts have been made to exploit the expressive power of deep convolutional networks. However, when the object to track is not known beforehand, it is necessary to perform Stochastic Gradient Descent online to adapt the weights of the network, severely compromising the speed of the system. In this paper we equip a basic tracking algorithm with a novel fully-convolutional Siamese network trained end-to-end on the ILSVRC15 dataset for object detection in video. Our tracker operates at frame-rates beyond real-time and, despite its extreme simplicity, achieves state-of-the-art performance in multiple benchmarks.", "How to effectively learn temporal variation of target appearance, to exclude the interference of cluttered background, while maintaining real-time response, is an essential problem of visual object tracking. Recently, Siamese networks have shown great potentials of matching based trackers in achieving balanced accuracy and beyond realtime speed. However, they still have a big gap to classification & updating based trackers in tolerating the temporal changes of objects and imaging conditions. In this paper, we propose dynamic Siamese network, via a fast transformation learning model that enables effective online learning of target appearance variation and background suppression from previous frames. We then present elementwise multi-layer fusion to adaptively integrate the network outputs using multi-level deep features. Unlike state-of-theart trackers, our approach allows the usage of any feasible generally- or particularly-trained features, such as SiamFC and VGG. More importantly, the proposed dynamic Siamese network can be jointly trained as a whole directly on the labeled video sequences, thus can take full advantage of the rich spatial temporal information of moving objects. As a result, our approach achieves state-of-the-art performance on OTB-2013 and VOT-2015 benchmarks, while exhibits superiorly balanced accuracy and real-time response over state-of-the-art competitors." ] }
1907.07613
2960281739
Template-matching methods for visual tracking have gained popularity recently due to their good performance and fast speed. However, they lack effective ways to adapt to changes in the target object's appearance, making their tracking accuracy still far from state-of-the-art. In this paper, we propose a dynamic memory network to adapt the template to the target's appearance variations during tracking. The reading and writing process of the external memory is controlled by an LSTM network with the search feature map as input. A spatial attention mechanism is applied to concentrate the LSTM input on the potential target as the location of the target is at first unknown. To prevent aggressive model adaptivity, we apply gated residual template learning to control the amount of retrieved memory that is used to combine with the initial template. In order to alleviate the drift problem, we also design a "negative" memory unit that stores templates for distractors, which are used to cancel out wrong responses from the object template. To further boost the tracking performance, an auxiliary classification loss is added after the feature extractor part. Unlike tracking-by-detection methods where the object's information is maintained by the weight parameters of neural networks, which requires expensive online fine-tuning to be adaptable, our tracker runs completely feed-forward and adapts to the target's appearance changes by updating the external memory. Moreover, the capacity of our model is not determined by the network size as with other trackers --- the capacity can be easily enlarged as the memory requirements of a task increase, which is favorable for memorizing long-term object information. Extensive experiments on the OTB and VOT datasets demonstrate that our trackers perform favorably against state-of-the-art tracking methods while retaining real-time speed.
To further improve the speed of SiamFC, @cite_55 reduces the feature computation cost for easy frames by using deep reinforcement learning to train policies for early stopping the feed-forward calculations of the CNN when the response confidence is high enough. SINT @cite_43 also uses Siamese networks for visual tracking and has higher accuracy, but runs much slower than SiamFC (2 fps vs 86 fps) due to the use of a deeper CNN (VGG16) for feature extraction and optical flow for its candidate sampling strategy. @cite_15 proposes a dual deep network by exploiting hierarchical features of CNN layers for object tracking. Unlike other template-matching models that use sliding windows or random sampling to generate candidate image patches for testing, GOTURN @cite_65 directly regresses the coordinates of the target's bounding box by comparing the previous and current image patches. Despite its fast speed and its advantage in handling scale and aspect ratio changes, its tracking accuracy is much lower than that of other state-of-the-art trackers.
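As a rough illustration of the direct box-regression idea attributed to GOTURN above, the toy network below compares the previous and current crops through a shared feature extractor and regresses four box coordinates; the layer sizes and crop size are invented for the example and are far smaller than a real tracker's backbone.

```python
import torch
import torch.nn as nn

class BoxRegressor(nn.Module):
    """Toy regressor: compare previous and current crops, output one bounding box."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(                     # tiny stand-in for a deep CNN
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten())
        self.regress = nn.Sequential(
            nn.Linear(2 * 32 * 4 * 4, 256), nn.ReLU(),
            nn.Linear(256, 4))                             # (x1, y1, x2, y2)

    def forward(self, prev_crop, curr_crop):
        f = torch.cat([self.features(prev_crop), self.features(curr_crop)], dim=1)
        return self.regress(f)

net = BoxRegressor()
box = net(torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64))
```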
{ "cite_N": [ "@cite_43", "@cite_55", "@cite_65", "@cite_15" ], "mid": [ "2408241409", "2963791342", "2964253307", "2562852216" ], "abstract": [ "In this paper we present a tracker, which is radically different from state-of-the-art trackers: we apply no model updating, no occlusion detection, no combination of trackers, no geometric matching, and still deliver state-of-theart tracking performance, as demonstrated on the popular online tracking benchmark (OTB) and six very challenging YouTube videos. The presented tracker simply matches the initial patch of the target in the first frame with candidates in a new frame and returns the most similar patch by a learned matching function. The strength of the matching function comes from being extensively trained generically, i.e., without any data of the target, using a Siamese deep neural network, which we design for tracking. Once learned, the matching function is used as is, without any adapting, to track previously unseen targets. It turns out that the learned matching function is so powerful that a simple tracker built upon it, coined Siamese INstance search Tracker, SINT, which only uses the original observation of the target from the first frame, suffices to reach state-of-theart performance. Further, we show the proposed tracker even allows for target re-identification after the target was absent for a complete video shot.", "Visual object tracking is a fundamental and time-critical vision task. Recent years have seen many shallow tracking methods based on real-time pixel-based correlation filters, as well as deep methods that have top performance but need a high-end GPU. In this paper, we learn to improve the speed of deep trackers without losing accuracy. Our fundamental insight is to take an adaptive approach, where easy frames are processed with cheap features (such as pixel values), while challenging frames are processed with invariant but expensive deep features. We formulate the adaptive tracking problem as a decision-making process, and learn an agent to decide whether to locate objects with high confidence on an early layer, or continue processing subsequent layers of a network. This significantly reduces the feedforward cost for easy frames with distinct or slow-moving objects. We train the agent offline in a reinforcement learning fashion, and further demonstrate that learning all deep layers (so as to provide good features for adaptive tracking) can lead to near real-time average tracking speed of 23 fps on a single CPU while achieving state-of-the-art performance. Perhaps most tellingly, our approach provides a 100X speedup for almost 50 of the time, indicating the power of an adaptive approach.", "Machine learning techniques are often used in computer vision due to their ability to leverage large amounts of training data to improve performance. Unfortunately, most generic object trackers are still trained from scratch online and do not benefit from the large number of videos that are readily available for offline training. We propose a method for offline training of neural networks that can track novel objects at test-time at 100 fps. Our tracker is significantly faster than previous methods that use neural networks for tracking, which are typically very slow to run and not practical for real-time applications. Our tracker uses a simple feed-forward network with no online training required. 
The tracker learns a generic relationship between object motion and appearance and can be used to track novel objects that do not appear in the training set. We test our network on a standard tracking benchmark to demonstrate our tracker’s state-of-the-art performance. Further, our performance improves as we add more videos to our offline training set. To the best of our knowledge, our tracker (Our tracker is available at http: davheld.github.io GOTURN GOTURN.html) is the first neural-network tracker that learns to track generic objects at 100 fps.", "Visual tracking addresses the problem of identifying and localizing an unknown target in a video given the target specified by a bounding box in the first frame. In this paper, we propose a dual network to better utilize features among layers for visual tracking. It is observed that features in higher layers encode semantic context while its counterparts in lower layers are sensitive to discriminative appearance. Thus we exploit the hierarchical features in different layers of a deep model and design a dual structure to obtain better feature representation from various streams, which is rarely investigated in previous work. To highlight geometric contours of the target, we integrate the hierarchical feature maps with an edge detector as the coarse prior maps to further embed local details around the target. To leverage the robustness of our dual network, we train it with random patches measuring the similarities between the network activation and target appearance, which serves as a regularization to enforce the dual network to focus on target object. The proposed dual network is updated online in a unique manner based on the observation, that the target being tracked in consecutive frames should share more similar feature representations than those in the surrounding background. It is also found that for a target object, the prior maps can help further enhance performance by passing message into the output maps of the dual network. Therefore, an independent component analysis with reference algorithm is employed to extract target context using prior maps as guidance. Online tracking is conducted by maximizing the posterior estimate on the final maps with stochastic and periodic update. Quantitative and qualitative evaluations on two large-scale benchmark data sets show that the proposed algorithm performs favorably against the state-of-the-arts." ] }
1907.07613
2960281739
Template-matching methods for visual tracking have gained popularity recently due to their good performance and fast speed. However, they lack effective ways to adapt to changes in the target object's appearance, making their tracking accuracy still far from state-of-the-art. In this paper, we propose a dynamic memory network to adapt the template to the target's appearance variations during tracking. The reading and writing process of the external memory is controlled by an LSTM network with the search feature map as input. A spatial attention mechanism is applied to concentrate the LSTM input on the potential target as the location of the target is at first unknown. To prevent aggressive model adaptivity, we apply gated residual template learning to control the amount of retrieved memory that is used to combine with the initial template. In order to alleviate the drift problem, we also design a "negative" memory unit that stores templates for distractors, which are used to cancel out wrong responses from the object template. To further boost the tracking performance, an auxiliary classification loss is added after the feature extractor part. Unlike tracking-by-detection methods where the object's information is maintained by the weight parameters of neural networks, which requires expensive online fine-tuning to be adaptable, our tracker runs completely feed-forward and adapts to the target's appearance changes by updating the external memory. Moreover, the capacity of our model is not determined by the network size as with other trackers --- the capacity can be easily enlarged as the memory requirements of a task increase, which is favorable for memorizing long-term object information. Extensive experiments on the OTB and VOT datasets demonstrate that our trackers perform favorably against state-of-the-art tracking methods while retaining real-time speed.
Multi-task learning has been successfully used in many applications of machine learning, ranging from natural language processing @cite_37 and speech recognition @cite_81 to computer vision @cite_39 . @cite_70 estimates the street direction in an autonomous driving car by predicting various characteristics of the road, which serve as auxiliary tasks. @cite_44 introduces auxiliary tasks of estimating head pose and facial attributes to boost the performance of facial landmark detection, while @cite_7 boosts the performance of a human pose estimation network by adding human joint detectors as auxiliary tasks. Recent works combining object detection and semantic segmentation @cite_45 @cite_77 , as well as image depth estimation and semantic segmentation @cite_83 @cite_53 , also demonstrate the effectiveness of multi-task learning in improving the generalization ability of neural networks. Observing that a CNN learned only for object similarity matching lacks invariance to appearance variations, we propose to add an auxiliary task, object classification, to regularize the CNN so that it learns object semantics.
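In practice, the auxiliary-task idea described here reduces to adding a weighted classification loss to the primary loss; the sketch below assumes a binary cross-entropy matching loss, a 10-way classification head and an auxiliary weight of 0.1, all of which are illustrative choices rather than the exact objective of the proposed tracker.

```python
import torch
import torch.nn.functional as F

def multitask_loss(match_logits, match_target, class_logits, class_labels, aux_weight=0.1):
    """Primary matching loss plus a weighted auxiliary classification loss."""
    primary = F.binary_cross_entropy_with_logits(match_logits, match_target)
    auxiliary = F.cross_entropy(class_logits, class_labels)
    return primary + aux_weight * auxiliary

# toy tensors: a 17x17 response map and a 10-way object classification head
loss = multitask_loss(torch.randn(1, 1, 17, 17), torch.rand(1, 1, 17, 17).round(),
                      torch.randn(1, 10), torch.tensor([3]))
```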
{ "cite_N": [ "@cite_37", "@cite_7", "@cite_70", "@cite_53", "@cite_39", "@cite_44", "@cite_77", "@cite_81", "@cite_45", "@cite_83" ], "mid": [ "2117130368", "2052678124", "", "", "", "1648933886", "", "2091432990", "2137881638", "1905829557" ], "abstract": [ "We describe a single convolutional neural network architecture that, given a sentence, outputs a host of language processing predictions: part-of-speech tags, chunks, named entity tags, semantic roles, semantically similar words and the likelihood that the sentence makes sense (grammatically and semantically) using a language model. The entire network is trained jointly on all these tasks using weight-sharing, an instance of multitask learning. All the tasks use labeled data except the language model which is learnt from unlabeled text and represents a novel form of semi-supervised learning for the shared tasks. We show how both multitask learning and semi-supervised learning improve the generalization of the shared tasks, resulting in state-of-the-art-performance.", "We propose a heterogeneous multi-task learning framework for human pose estimation from monocular images using a deep convolutional neural network. In particular, we simultaneously learn a human pose regressor and sliding-window body-part and joint-point detectors in a deep network architecture. We show that including the detection tasks helps to regularize the network, directing it to converge to a good solution. We report competitive and state-of-art results on several datasets. We also empirically show that the learned neurons in the middle layer of our network are tuned to localized body parts.", "", "", "", "Multi-task learning is a learning paradigm which seeks to improve the generalization performance of a learning task with the help of some other related tasks. In this paper, we propose a regularization formulation for learning the relationships between tasks in multi-task learning. This formulation can be viewed as a novel generalization of the regularization framework for single-task learning. Besides modeling positive task correlation, our method, called multi-task relationship learning (MTRL), can also describe negative task correlation and identify outlier tasks based on the same underlying principle. Under this regularization framework, the objective function of MTRL is convex. For efficiency, we use an alternating method to learn the optimal model parameters for each task as well as the relationships between tasks. We study MTRL in the symmetric multi-task learning setting and then generalize it to the asymmetric setting as well. We also study the relationships between MTRL and some existing multi-task learning methods. Experiments conducted on a toy problem as well as several benchmark data sets demonstrate the effectiveness of MTRL.", "", "In this paper, we provide an overview of the invited and contributed papers presented at the special session at ICASSP-2013, entitled “New Types of Deep Neural Network Learning for Speech Recognition and Related Applications,” as organized by the authors. We also describe the historical context in which acoustic models based on deep neural networks have been developed. 
The technical overview of the papers presented in our special session is organized into five ways of improving deep learning methods: (1) better optimization; (2) better types of neural activation function and better network architectures; (3) better ways to determine the myriad hyper-parameters of deep neural networks; (4) more appropriate ways to preprocess speech for deep neural networks; and (5) ways of leveraging multiple languages or dialects that are more easily achieved with deep neural networks than with Gaussian mixture models.", "In this paper we propose an approach to holistic scene understanding that reasons jointly about regions, location, class and spatial extent of objects, presence of a class in the image, as well as the scene type. Learning and inference in our model are efficient as we reason at the segment level, and introduce auxiliary variables that allow us to decompose the inherent high-order potentials into pairwise potentials between a few variables with small number of states (at most the number of classes). Inference is done via a convergent message-passing algorithm, which, unlike graph-cuts inference, has no submodularity restrictions and does not require potential specific moves. We believe this is very important, as it allows us to encode our ideas and prior knowledge about the problem without the need to change the inference engine every time we introduce a new potential. Our approach outperforms the state-of-the-art on the MSRC-21 benchmark, while being much faster. Importantly, our holistic model is able to improve performance in all tasks.", "In this paper we address three different computer vision tasks using a single basic architecture: depth prediction, surface normal estimation, and semantic labeling. We use a multiscale convolutional network that is able to adapt easily to each task using only small modifications, regressing from the input image to the output map directly. Our method progressively refines predictions using a sequence of scales, and captures many image details without any superpixels or low-level segmentation. We achieve state-of-the-art performance on benchmarks for all three tasks." ] }
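The auxiliary-task idea summarized in the record above (an object-classification head regularizing a similarity-matching CNN) can be illustrated with a short sketch. This is not the cited authors' implementation; the network sizes, the number of classes, and the loss weight lam are assumptions made purely for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MatchWithAuxClassifier(nn.Module):
    # Shared convolutional trunk feeding two heads: a similarity-matching
    # head (main task) and an object-classification head (auxiliary task).
    def __init__(self, feat_dim=128, num_classes=20):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        self.classifier = nn.Linear(feat_dim, num_classes)  # auxiliary head

    def forward(self, img_a, img_b):
        fa, fb = self.trunk(img_a), self.trunk(img_b)
        match_logit = F.cosine_similarity(fa, fb)   # main task: same object or not
        class_logits = self.classifier(fa)          # auxiliary task: object class
        return match_logit, class_logits

def multitask_loss(match_logit, same_object, class_logits, labels, lam=0.5):
    # The auxiliary classification loss acts as a regularizer on the shared trunk.
    main = F.binary_cross_entropy_with_logits(match_logit, same_object.float())
    aux = F.cross_entropy(class_logits, labels)
    return main + lam * aux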
1907.07647
2960675232
Particle Swarm Optimisation (PSO) is a powerful optimisation algorithm that can be used to locate global maxima in a search space. Recent interest in swarms of Micro Aerial Vehicles (MAVs) begs the question as to whether PSO can be used as a method to enable real robotic swarms to locate a target goal point. However, the original PSO algorithm does not take into account collisions between particles during search. In this paper we propose a novel algorithm called Force Field Particle Swarm Optimisation (FFPSO) that designates repellent force fields to particles such that these fields provide an additional velocity component into the original PSO equations. We compare the performance of FFPSO with PSO and show that it has the ability to reduce the number of particle collisions during search to 0 whilst also being able to locate a target of interest in a similar amount of time. The scalability of the algorithm is also demonstrated via a set of experiments that considers how the number of crashes and the time taken to find the goal varies according to swarm size. Finally, we demonstrate the algorithms applicability on a swarm of real MAVs.
We begin by reviewing the most related work in the areas of PSO, Potential Field methods and Flocking. PSO itself is a vast field with applications in many different areas (see @cite_22 for details); our aim here is not to cover the entirety of this but only what is relevant to aerial and swarm robotics. We also review some relevant work in the area of Potential Field methods, which are almost identical in nature to the "force field" used in this work. However, we feel the alternative name is more appropriate in our work due to the 3-dimensional and finite nature of our fields acting around aerial robots. Finally, we review similar collision avoidance strategies employed in flocking algorithms.
{ "cite_N": [ "@cite_22" ], "mid": [ "1859314164" ], "abstract": [ "Particle swarm optimization (PSO) is a heuristic global optimization method, proposed originally by Kennedy and Eberhart in 1995. It is now one of the most commonly used optimization techniques. This survey presented a comprehensive investigation of PSO. On one hand, we provided advances with PSO, including its modifications (including quantum-behaved PSO, bare-bones PSO, chaotic PSO, and fuzzy PSO), population topology (as fully connected, von Neumann, ring, star, random, etc.), hybridization (with genetic algorithm, simulated annealing, Tabu search, artificial immune system, ant colony algorithm, artificial bee colony, differential evolution, harmonic search, and biogeography-based optimization), extensions (to multiobjective, constrained, discrete, and binary optimization), theoretical analysis (parameter selection and tuning, and convergence analysis), and parallel implementation (in multicore, multiprocessor, GPU, and cloud computing forms). On the other hand, we offered a survey on applications of PSO to the following eight fields: electrical and electronic engineering, automation control systems, communication theory, operations research, mechanical engineering, fuel and energy, medicine, chemistry, and biology. It is hoped that this survey would be beneficial for the researchers studying PSO algorithms." ] }
1907.07647
2960675232
Particle Swarm Optimisation (PSO) is a powerful optimisation algorithm that can be used to locate global maxima in a search space. Recent interest in swarms of Micro Aerial Vehicles (MAVs) begs the question as to whether PSO can be used as a method to enable real robotic swarms to locate a target goal point. However, the original PSO algorithm does not take into account collisions between particles during search. In this paper we propose a novel algorithm called Force Field Particle Swarm Optimisation (FFPSO) that designates repellent force fields to particles such that these fields provide an additional velocity component into the original PSO equations. We compare the performance of FFPSO with PSO and show that it has the ability to reduce the number of particle collisions during search to 0 whilst also being able to locate a target of interest in a similar amount of time. The scalability of the algorithm is also demonstrated via a set of experiments that considers how the number of crashes and the time taken to find the goal varies according to swarm size. Finally, we demonstrate the algorithms applicability on a swarm of real MAVs.
PSO has been applied to Unmanned Aerial Vehicles (UAVs) and MAVs in various ways already. Optimal route planning for MAVs is an optimisation problem that is tackled in @cite_19 @cite_1 by constructing complex fitness functions consisting of a number of different metrics that would affect the success of an MAV carrying out reconnaissance missions. These works modify the fitness function, whereas our work modifies the PSO equation directly. In the case of complicated fitness functions, the computational requirements of evaluating them for each individual at each time step could be far greater than our proposed method. In @cite_6 @cite_9 , PSO is used to tune the parameters of a PID controller for an AR.Drone by constructing a multi-objective fitness function that takes into account a number of performance metrics w.r.t. the PID controller. In @cite_16 , PSO is hybridised with a Genetic Algorithm (GA) in order to optimise formation reconfiguration in swarms of UAVs. A hybrid algorithm is proposed that combines the advantages of both optimisation methods and is shown to outperform PSO in a series of simulated experiments. This algorithm optimises the control inputs of the UAVs such that optimal swarm reconfiguration can be achieved in battle-like simulations.
{ "cite_N": [ "@cite_9", "@cite_1", "@cite_6", "@cite_19", "@cite_16" ], "mid": [ "836179420", "", "2471940543", "2053726668", "2023086460" ], "abstract": [ "This work presents a method to build a robust controller for a hose transportation system performed by aerial robots. We provide the system dynamic model, equations and desired equilibrium criteria. Control is obtained through PID controllers tuned by particle swarm optimization (PSO). The control strategy is illustrated for three quadrotors carrying two sections of a hose, but the model can be easily expanded to a bigger number of quadrotors system, due to the approach modularity. Experiments demonstrate the PSO tuning method convergence, which is fast. More than one solution is possible, and control is very robust.", "", "In this paper, a proposed particle swarm optimization called multi-objective particle swarm optimization (MOPSO) with an accelerated update methodology is employed to tune Proportional-Integral-Derivative (PID) controller for an AR.Drone quadrotor. The proposed approach is to modify the velocity formula of the general PSO systems in order for improving the searching efficiency and actual execution time. Three PID control parameters, i.e., the proportional gain Kp, integral gain K; and derivative gain Kd are required to form a parameter vector which is considered as a particle of PSO. To derive the optimal PID parameters for the Ar.Drone, the modified update method is employed to move the positions of all particles in the population. In the meanwhile, multi-objective functions defined for PID controller optimization problems are minimized. The results verify that the proposed MOPSO is able to perform appropriately in Ar.Drone control system.", "Aiming at solving the problems of unsatisfactory routes planning that are suboptimal to optimal routes planning and dissatisfactory real time routes planning and routes planning of multiple unmanned aerial vehicles (UAVs) cooperation, a cost function of multiple UAVs routes planning is presented after modeling the primary factors influencing on multiple UAVs route planning. Then, Based on particle swarm optimization (PSO) algorithm, this paper presents the approaches of static three-dimensional routes planning and dynamic three-dimensional routes planning for multiple UAVs cooperation. The approaches can also be used to plan a single UAV route. Using these approaches, an emluator is designed and some experiments are made, and the results demonstrate that the approaches can satisfy the requests of the multiple UAVs routes planning.", "The initial state of an Unmanned Aerial Vehicle (UAV) system and the relative state of the system, the continuous inputs of each flight unit are piecewise linear by a Control Parameterization and Time Discretization (CPTD) method. The approximation piecewise linearization control inputs are used to substitute for the continuous inputs. In this way, the multi-UAV formation reconfiguration problem can be formulated as an optimal control problem with dynamical and algebraic constraints. With strict constraints and mutual interference, the multi-UAV formation reconfiguration in 3-D space is a complicated problem. The recent boom of bio-inspired algorithms has attracted many researchers to the field of applying such intelligent approaches to complicated optimization problems in multi-UAVs. 
In this paper, a Hybrid Particle Swarm Optimization and Genetic Algorithm (HPSOGA) is proposed to solve the multi-UAV formation reconfiguration problem, which is modeled as a parameter optimization problem. This new approach combines the advantages of Particle Swarm Optimization (PSO) and Genetic Algorithm (GA), which can find the time-optimal solutions simultaneously. The proposed HPSOGA will also be compared with basic PSO algorithm and the series of experimental results will show that our HPSOGA outperforms PSO in solving multi-UAV formation reconfiguration problem under complicated environments." ] }
1907.07647
2960675232
Particle Swarm Optimisation (PSO) is a powerful optimisation algorithm that can be used to locate global maxima in a search space. Recent interest in swarms of Micro Aerial Vehicles (MAVs) begs the question as to whether PSO can be used as a method to enable real robotic swarms to locate a target goal point. However, the original PSO algorithm does not take into account collisions between particles during search. In this paper we propose a novel algorithm called Force Field Particle Swarm Optimisation (FFPSO) that designates repellent force fields to particles such that these fields provide an additional velocity component into the original PSO equations. We compare the performance of FFPSO with PSO and show that it has the ability to reduce the number of particle collisions during search to 0 whilst also being able to locate a target of interest in a similar amount of time. The scalability of the algorithm is also demonstrated via a set of experiments that considers how the number of crashes and the time taken to find the goal varies according to swarm size. Finally, we demonstrate the algorithms applicability on a swarm of real MAVs.
The work most related to ours in theoretical approach is @cite_18 . In this work each individual ePuck robot represents a particle in the PSO algorithm, where the aim is to find an area of interest. However, the main contributions of our work compared to @cite_18 are that we extend this model to 3 dimensions for aerial vehicles and we show our algorithm operating on a real swarm, whereas @cite_18 only tests the algorithm in simulation. Similar to @cite_7 , @cite_18 employs a simple Braitenberg collision avoidance scheme in which particles instantaneously move in opposite directions after a collision and then resume the velocity they had before the collision occurred.
{ "cite_N": [ "@cite_18", "@cite_7" ], "mid": [ "2112571153", "2783023810" ], "abstract": [ "Within the field of multi-robot systems, multi-robot search is one area which is currently receiving a lot of research attention. One major challenge within this area is to design effective algorithms that allow a team of robots to work together to find their targets. Techniques have been adopted for multi-robot search from the particle swarm optimization algorithm, which uses a virtual multi-agent search to find optima in a multi-dimensional function space. We present here a multi-search algorithm inspired by particle swarm optimization. Additionally, we exploit this inspiration by modifying the particle swarm optimization algorithm to mimic the multi-robot search process, thereby allowing us to model at an abstracted level the effects of changing aspects and parameters of the system such as number of robots and communication range", "This work proposes a method for moving a swarm of autonomous Unmanned Aerial Vehicles to accomplish an specific task. The approach uses a centralized strategy which considers a trajectory calculation and collision avoidance. The solution was implemented in a simulated scenario as well as in a real controlled environment using a swarm of nano drones, together with a setup supported by a motion capture system. The solution was tested while planting virtual seeds in a field composed by a grid of points that represent the places to be sown. Experiments were performed for measuring completion times and attempts to prevent impacts in order to test the effectiveness, scalability and stability of the solution as well as the robustness of the collision avoidance algorithm while increasing the number of agents to perform the task." ] }
1907.07377
2959120033
The Controller Area Network (CAN) bus in vehicles is an efficient standard bus enabling communication between all Electronic Control Units (ECUs). However, the CAN bus cannot protect itself because it lacks security features. To detect suspicious network connections effectively, an intrusion detection system (IDS) is strongly required. Unlike traditional IDSs for the Internet, there are only a small number of known attack signatures for vehicle networks. In addition, an IDS for vehicles requires high accuracy because any false-positive error can seriously affect the safety of the driver. To solve this problem, we propose a novel IDS model for in-vehicle networks, GIDS (GAN-based Intrusion Detection System), using the deep-learning model Generative Adversarial Nets. GIDS can learn to detect unknown attacks using only normal data. Experimental results show that GIDS achieves high detection accuracy for four unknown attacks.
Early research on anomaly detection for in-vehicle systems was introduced by Hoppe @cite_3 , who presented three selected characteristics as patterns usable for anomaly detection: the recognition of an increased frequency of cyclic CAN messages, the observation of low-level communication characteristics, and the identification of obvious misuse of message IDs. Müter proposed an entropy-based anomaly detection approach @cite_0 . Marchetti analyzed and identified anomalies in the sequence of CAN messages @cite_6 ; the proposed model features small memory and computational footprints. Salman proposed a software-based light-weight IDS and two anomaly-based algorithms based on message cycle-time analysis and plausibility analysis of messages @cite_2 , contributing to more advanced research in the field of IDSs for in-vehicle networks.
{ "cite_N": [ "@cite_0", "@cite_6", "@cite_3", "@cite_2" ], "mid": [ "2584565207", "2739928414", "2148974412", "2756382106" ], "abstract": [ "", "This paper proposes a novel intrusion detection algorithm that aims to identify malicious CAN messages injected by attackers in the CAN bus of modern vehicles. The proposed algorithm identifies anomalies in the sequence of messages that flow in the CAN bus and is characterized by small memory and computational footprints, that make it applicable to current ECUs. Its detection performance are demonstrated through experiments carried out on real CAN traffic gathered from an unmodified licensed vehicle.", "The IT security of automotive systems is an evolving area of research. To analyse the current situation we performed several practical tests on recent automotive technology, focusing on automotive systems based on CAN bus technology. With respect to the results of these tests, in this paper we discuss selected countermeasures to address the basic weaknesses exploited in our tests and also give a short outlook to requirements, potential and restrictions of future, holistic approaches.", "The Controller Area Network (CAN) was specified with no regards to security mechanisms at all. This fact in combination with the widespread adoption of the CAN standard for connecting more than a hundred Electrical Control Units (ECUs), which control almost every aspect of modern cars, makes the CAN bus a valuable target for adversaries. As vehicles are safety-critical systems and the physical integrity of the driver has the highest priority, it is necessary to invent suitable countermeasures to limit CAN’s security risks. As a matter of fact, the close resemblances of in-vehicle networks to traditional computer networks, enables the use of conventional countermeasures, e.g. Intrusion Detection Systems (IDS). We propose a software-based light-weight IDS relying on properties extracted from the signal database of a CAN domain. Further, we suggest two anomaly-based algorithms based on message cycle time analysis and plausibility analysis of messages (e.g. speed messages). We evaluate our IDS on a simulated setup, as well as a real in-vehicle network, by performing attacks on different parts of the network. Our evaluation shows that the proposed IDS successfully detects malicious events such as injection of malformed CAN frames, unauthorized CAN frames, speedometer plausibility detection and Denial of Service (DoS) attacks. Based on our experience of implementing an in-vehicle IDS, we discuss potential challenges and constraints that engineers might face during the process of implementing an IDS system for in-vehicle networks. We believe that the results of this work can contribute to more advanced research in the field of intrusion detection systems for in-vehicle networks and thereby add to a safer driving experience." ] }
1907.07377
2959120033
The Controller Area Network (CAN) bus in vehicles is an efficient standard bus enabling communication between all Electronic Control Units (ECUs). However, the CAN bus cannot protect itself because it lacks security features. To detect suspicious network connections effectively, an intrusion detection system (IDS) is strongly required. Unlike traditional IDSs for the Internet, there are only a small number of known attack signatures for vehicle networks. In addition, an IDS for vehicles requires high accuracy because any false-positive error can seriously affect the safety of the driver. To solve this problem, we propose a novel IDS model for in-vehicle networks, GIDS (GAN-based Intrusion Detection System), using the deep-learning model Generative Adversarial Nets. GIDS can learn to detect unknown attacks using only normal data. Experimental results show that GIDS achieves high detection accuracy for four unknown attacks.
Much security research in various fields has adopted deep-learning methods for IDSs. For example, Zhang presented a deep-learning method to detect Web attacks using a specially designed CNN @cite_7 . The method analyzes HTTP request packets and needs only light preprocessing, while the tedious feature extraction is done by the CNN itself. Recently, Generative Adversarial Nets (GANs) have been applied not only to image generation but also to other problems such as anomaly detection. Schlegl proposed AnoGAN, a deep convolutional generative adversarial network that learns a manifold of normal anatomical variability, and demonstrated that the approach correctly identifies anomalous images, such as images containing retinal fluid @cite_4 .
{ "cite_N": [ "@cite_4", "@cite_7" ], "mid": [ "2599354622", "2765438277" ], "abstract": [ "Obtaining models that capture imaging markers relevant for disease progression and treatment monitoring is challenging. Models are typically based on large amounts of data with annotated examples of known markers aiming at automating detection. High annotation effort and the limitation to a vocabulary of known markers limit the power of such approaches. Here, we perform unsupervised learning to identify anomalies in imaging data as candidates for markers. We propose AnoGAN, a deep convolutional generative adversarial network to learn a manifold of normal anatomical variability, accompanying a novel anomaly scoring scheme based on the mapping from image space to a latent space. Applied to new data, the model labels anomalies, and scores image patches indicating their fit into the learned distribution. Results on optical coherence tomography images of the retina demonstrate that the approach correctly identifies anomalous images, such as images containing retinal fluid or hyperreflective foci.", "With the increasing information sharing and other activities conducted on the World Wide Web, the Web has become the main venue for attackers to make troubles. The effective methods to detect Web attacks are critical and significant to guarantee the Web security. In recent years, many machine learning methods have been applied to detect Web attacks. We present a deep learning method to detect Web attacks by using a specially designed CNN. The method is based on analyzing the HTTP request packets, to which only some preprocessing is needed whereas the tedious feature extraction is done by the CNN itself. The experimental results on dataset HTTP DATASET CSIC 2010 show that the designed CNN has a good performance and the method achieves satisfactory results in detecting Web attacks, having a high detection rate while keeping a low false alarm rate." ] }
1907.07202
2959373581
Human gaze is known to be a strong indicator of underlying human intentions and goals during manipulation tasks. This work studies gaze patterns of human teachers demonstrating tasks to robots and proposes ways in which such patterns can be used to enhance robot learning. Using both kinesthetic teaching and video demonstrations, we identify novel intention-revealing gaze behaviors during teaching. These prove to be informative in a variety of problems ranging from reference frame inference to segmentation of multi-step tasks. Based on our findings, we propose two proof-of-concept algorithms which show that gaze data can enhance subtask classification for a multi-step task by up to 6% and reward inference and policy learning for a single-step task by up to 67%. Our findings provide a foundation for a model of natural human gaze in robot learning from demonstration settings and present open problems for utilizing human gaze to enhance robot learning.
There is also a rich body of work on eye gaze for human-robot interaction @cite_9 . Nonverbal cues, including gaze, have been used to study timing coordination between humans and robots. Gaze information has also been shown to enable the establishment of joint attention between the human and robot partner, the recognition of human behavior and the execution of anticipatory actions @cite_9 . However, these prior works focus on gaze cues generated by the robot and not on gaze cues from humans. More recently, human gaze behavior has been studied for shared manipulation, where users controlled a robot arm mounted on a wheelchair via a joystick for assistive tasks of daily living. Novel patterns of gaze behavior were identified, such as people using visual feedback to align the robot arm in a certain orientation, and cognitive load being higher for teleoperation than for the shared autonomy condition. However, the eye gaze behavior of human teachers has not been studied in the context of robot learning from demonstrations.
{ "cite_N": [ "@cite_9" ], "mid": [ "2617211984" ], "abstract": [ "This article reviews the state of the art in social eye gaze for human-robot interaction (HRI). It establishes three categories of gaze research in HRI, defined by differences in goals and methods: a human-centered approach, which focuses on people's responses to gaze; a design-centered approach, which addresses the features of robot gaze behavior and appearance that improve interaction; and a technology-centered approach, which is concentrated on the computational tools for implementing social eye gaze in robots. This paper begins with background information about gaze research in HRI and ends with a set of open questions." ] }
1907.07202
2959373581
Human gaze is known to be a strong indicator of underlying human intentions and goals during manipulation tasks. This work studies gaze patterns of human teachers demonstrating tasks to robots and proposes ways in which such patterns can be used to enhance robot learning. Using both kinesthetic teaching and video demonstrations, we identify novel intention-revealing gaze behaviors during teaching. These prove to be informative in a variety of problems ranging from reference frame inference to segmentation of multi-step tasks. Based on our findings, we propose two proof-of-concept algorithms which show that gaze data can enhance subtask classification for a multi-step task by up to 6% and reward inference and policy learning for a single-step task by up to 67%. Our findings provide a foundation for a model of natural human gaze in robot learning from demonstration settings and present open problems for utilizing human gaze to enhance robot learning.
There has also been some recent work on utilizing human eye gaze for learning algorithms. One line of work used demonstrations from a person wearing eye-tracking hardware along with an egocentric camera to simultaneously ground symbols to their instances in the environment and learn the appearance of such object instances. Another used gaze information as a heuristic to compute a prior distribution over the goal location of reaching motions in a manipulation task, which allows for efficient inference in a multiple-model filtering approach for early intention recognition of reaching actions by pruning the model-matching filters that need to be run in parallel. In our work, we show that the use of gaze in conjunction with state-action knowledge can improve reward learning via Bayesian inverse reinforcement learning (BIRL) @cite_8 .
{ "cite_N": [ "@cite_8" ], "mid": [ "1591675293" ], "abstract": [ "Inverse Reinforcement Learning (IRL) is the problem of learning the reward function underlying a Markov Decision Process given the dynamics of the system and the behaviour of an expert. IRL is motivated by situations where knowledge of the rewards is a goal by itself (as in preference elicitation) and by the task of apprenticeship learning (learning policies from an expert). In this paper we show how to combine prior knowledge and evidence from the expert's actions to derive a probability distribution over the space of reward functions. We present efficient algorithms that find solutions for the reward learning and apprenticeship learning tasks that generalize well over these distributions. Experimental results show strong improvement for our methods over previous heuristic-based approaches." ] }
1907.07384
2962029365
Mutual information has been successfully adopted in filter feature-selection methods to assess both the relevancy of a subset of features in predicting the target variable and the redundancy with respect to other variables. However, existing algorithms are mostly heuristic and do not offer any guarantee on the proposed solution. In this paper, we provide novel theoretical results showing that conditional mutual information naturally arises when bounding the ideal regression classification errors achieved by different subsets of features. Leveraging on these insights, we propose a novel stopping condition for backward and forward greedy methods which ensures that the ideal prediction error using the selected feature subset remains bounded by a user-specified threshold. We provide numerical simulations to support our theoretical claims and compare to common heuristic methods.
A related theoretical study of feature selection via MI has been recently proposed by @cite_25 . The authors show that the problem of finding the minimal feature subset such that the conditional likelihood of the targets is maximized is equivalent to minimizing the CMI. Based on this result, common heuristics for information-theoretic feature selection can be seen as iteratively maximizing the conditional likelihood. Similarly, we show a connection between the CMI and the optimal prediction error. Differently from @cite_25 , we additionally propose a novel stopping condition that is well motivated by our theoretical findings.
{ "cite_N": [ "@cite_25" ], "mid": [ "2156504490" ], "abstract": [ "We present a unifying framework for information theoretic feature selection, bringing almost two decades of research on heuristic filter criteria under a single theoretical interpretation. This is in response to the question: \"what are the implicit statistical assumptions of feature selection criteria based on mutual information?\". To answer this, we adopt a different strategy than is usual in the feature selection literature--instead of trying to define a criterion, we derive one, directly from a clearly specified objective function: the conditional likelihood of the training labels. While many hand-designed heuristic criteria try to optimize a definition of feature 'relevancy' and 'redundancy', our approach leads to a probabilistic framework which naturally incorporates these concepts. As a result we can unify the numerous criteria published over the last two decades, and show them to be low-order approximations to the exact (but intractable) optimisation problem. The primary contribution is to show that common heuristics for information based feature selection (including Markov Blanket algorithms as a special case) are approximate iterative maximisers of the conditional likelihood. A large empirical study provides strong evidence to favour certain classes of criteria, in particular those that balance the relative size of the relevancy redundancy terms. Overall we conclude that the JMI criterion (Yang and Moody, 1999; , 2008) provides the best tradeoff in terms of accuracy, stability, and flexibility with small data samples." ] }
1907.07384
2962029365
Mutual information has been successfully adopted in filter feature-selection methods to assess both the relevancy of a subset of features in predicting the target variable and the redundancy with respect to other variables. However, existing algorithms are mostly heuristic and do not offer any guarantee on the proposed solution. In this paper, we provide novel theoretical results showing that conditional mutual information naturally arises when bounding the ideal regression classification errors achieved by different subsets of features. Leveraging on these insights, we propose a novel stopping condition for backward and forward greedy methods which ensures that the ideal prediction error using the selected feature subset remains bounded by a user-specified threshold. We provide numerical simulations to support our theoretical claims and compare to common heuristic methods.
In the information theory literature, @cite_13 also analyzes the connection between CMI and minimum mean square error, deriving a similar result to our Theorem . However, classification problems (i.e., minimum zero-one loss) are not considered and the focus is not on feature selection.
{ "cite_N": [ "@cite_13" ], "mid": [ "2164714023" ], "abstract": [ "In addition to exploring its various regularity properties, we show that the minimum mean-square error (MMSE) is a concave functional of the input-output joint distribution. In the case of additive Gaussian noise, the MMSE is shown to be weakly continuous in the input distribution and Lipschitz continuous with respect to the quadratic Wasserstein distance for peak-limited inputs. Regularity properties of mutual information are also obtained. Several applications to information theory and the central limit theorem are discussed." ] }
1907.07384
2962029365
Mutual information has been successfully adopted in filter feature-selection methods to assess both the relevancy of a subset of features in predicting the target variable and the redundancy with respect to other variables. However, existing algorithms are mostly heuristic and do not offer any guarantee on the proposed solution. In this paper, we provide novel theoretical results showing that conditional mutual information naturally arises when bounding the ideal regression classification errors achieved by different subsets of features. Leveraging on these insights, we propose a novel stopping condition for backward and forward greedy methods which ensures that the ideal prediction error using the selected feature subset remains bounded by a user-specified threshold. We provide numerical simulations to support our theoretical claims and compare to common heuristic methods.
The authors of @cite_17 propose a nearest neighbor estimator for the CMI and show how it can be used in a classic forward feature selection algorithm. One of the authors' questions is how to devise a suitable stopping condition for such methods. Here we propose a possible answer: our stopping criterion (Section ) is intuitive, applicable to both forward and backward algorithms, and theoretically well-grounded.
{ "cite_N": [ "@cite_17" ], "mid": [ "2044956198" ], "abstract": [ "Mutual information (MI) is used in feature selection to evaluate two key-properties of optimal features, the relevance of a feature to the class variable and the redundancy of similar features. Conditional mutual information (CMI), i.e., MI of the candidate feature to the class variable conditioning on the features already selected, is a natural extension of MI but not so far applied due to estimation complications for high dimensional distributions. We propose the nearest neighbor estimate of CMI, appropriate for high-dimensional variables, and build an iterative scheme for sequential feature selection with a termination criterion, called CMINN. We show that CMINN is equivalent to feature selection MI filters, such as mRMR and MaxiMin, in the presence of solely single feature effects, and more appropriate for combined feature effects. We compare CMINN to mRMR and MaxiMin on simulated datasets involving combined effects and confirm the superiority of CMINN in selecting the correct features (indicated also by the termination criterion) and giving best classification accuracy. The application to ten benchmark databases shows that CMINN obtains the same or higher classification accuracy compared to mRMR and MaxiMin at a smaller cardinality of the selected feature subset." ] }
1907.07384
2962029365
Mutual information has been successfully adopted in filter feature-selection methods to assess both the relevancy of a subset of features in predicting the target variable and the redundancy with respect to other variables. However, existing algorithms are mostly heuristic and do not offer any guarantee on the proposed solution. In this paper, we provide novel theoretical results showing that conditional mutual information naturally arises when bounding the ideal regression classification errors achieved by different subsets of features. Leveraging on these insights, we propose a novel stopping condition for backward and forward greedy methods which ensures that the ideal prediction error using the selected feature subset remains bounded by a user-specified threshold. We provide numerical simulations to support our theoretical claims and compare to common heuristic methods.
Several existing approaches use linear correlation measures to score the different features @cite_31 @cite_11 @cite_5 @cite_6 @cite_7 . Such algorithms are mostly based on the heuristic intuition that a good feature should be highly correlated with the class and lowly correlated with the other features. Instead, we provide a more theoretical justification for this claim (Section ), showing a connection between these two properties and the minimum MSE.
{ "cite_N": [ "@cite_7", "@cite_6", "@cite_5", "@cite_31", "@cite_11" ], "mid": [ "33196642", "2013396429", "1661871015", "2054033094", "1495061682" ], "abstract": [ "Feature selection is a preprocessing phase to machine learning, which leads to increase the classification accuracy and reduce its complexity. However, the increase of data dimensionality poses a challenge to many existing feature selection methods. This paper formulates and validates a method for selecting optimal feature subset based on the analysis of the Pearson correlation coefficients. We adopt the correlation analysis between two variables as a feature goodness measure. Where, a feature is good if it is highly correlated to the class and is low correlated to the other features. To evaluate the proposed Feature selection method, experiments are applied on NSL-KDD dataset. The experiments shows that, the number of features is reduced from 41 to 17 features, which leads to improve the classification accuracy to 99.1 . Also,The efficiency of the proposed linear correlation feature selection method is demonstrated through extensive comparisons with other well known feature selection methods.", "Battiti's mutual information feature selector (MIFS) and its variant algorithms are used for many classification applications. Since they ignore feature synergy, MIFS and its variants may cause a big bias when features are combined to cooperate together. Besides, MIFS and its variants estimate feature redundancy regardless of the corresponding classification task. In this paper, we propose an automated greedy feature selection algorithm called conditional mutual information-based feature selection (CMIFS). Based on the link between interaction information and conditional mutual information, CMIFS takes account of both redundancy and synergy interactions of features and identifies discriminative features. In addition, CMIFS combines feature redundancy evaluation with classification tasks. It can decrease the probability of mistaking important features as redundant features in searching process. The experimental results show that CMIFS can achieve higher best-classification-accuracy than MIFS and its variants, with the same or less (nearly 50 ) number of features.", "Feature selection, as a preprocessing step to machine learning, is effective in reducing dimensionality, removing irrelevant data, increasing learning accuracy, and improving result comprehensibility. However, the recent increase of dimensionality of data poses a severe challenge to many existing feature selection methods with respect to efficiency and effectiveness. In this work, we introduce a novel concept, predominant correlation, and propose a fast filter method which can identify relevant features as well as redundancy among relevant features without pairwise correlation analysis. The efficiency and effectiveness of our method is demonstrated through extensive comparisons with other methods using real-world data of high dimensionality", "Features which are linearly dependent on other features do not contribute toward pattern classification by linear techniques. In order to detect the linearly dependent features, a measure of linear dependence is proposed. This measure is used as an aid in feature selection. Speaker verification experiments demonstrate the usefulness of employing such a measure in practice.", "A central problem in machine learning is identifying a representative set of features from which to construct a classification model for a particular task. 
This thesis addresses the problem of feature selection for machine learning through a correlation based approach. The central hypothesis is that good feature sets contain features that are highly correlated with the class, yet uncorrelated with each other. A feature evaluation formula, based on ideas from test theory, provides an operational definition of this hypothesis. CFS (Correlation based Feature Selection) is an algorithm that couples this evaluation formula with an appropriate correlation measure and a heuristic search strategy. CFS was evaluated by experiments on artificial and natural datasets. Three machine learning algorithms were used: C4.5 (a decision tree learner), IB1 (an instance based learner), and naive Bayes. Experiments on artificial datasets showed that CFS quickly identifies and screens irrelevant, redundant, and noisy features, and identifies relevant features as long as their relevance does not strongly depend on other features. On natural domains, CFS typically eliminated well over half the features. In most cases, classification accuracy using the reduced feature set equaled or bettered accuracy using the complete feature set. Feature selection degraded machine learning performance in cases where some features were eliminated which were highly predictive of very small areas of the instance space. Further experiments compared CFS with a wrapper—a well known approach to feature selection that employs the target learning algorithm to evaluate feature sets. In many cases CFS gave comparable results to the wrapper, and in general, outperformed the wrapper on small datasets. CFS executes many times faster than the wrapper, which allows it to scale to larger datasets. Two methods of extending CFS to handle feature interaction are presented and experimentally evaluated. The first considers pairs of features and the second incorporates iii feature weights calculated by the RELIEF algorithm. Experiments on artificial domains showed that both methods were able to identify interacting features. On natural domains, the pairwise method gave more reliable results than using weights provided by RELIEF." ] }
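The "highly correlated with the class, lowly correlated with each other" heuristic referenced above can be operationalized in many ways; the sketch below uses a simple relevance-minus-redundancy score built from Pearson correlations. This particular score is an illustrative choice and is not the exact criterion of any single cited method.

import numpy as np

def correlation_based_selection(X, y, n_select):
    # Greedily pick features that are highly correlated with the target (relevance)
    # and lowly correlated with the already selected features (redundancy).
    n_features = X.shape[1]
    relevance = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(n_features)])
    selected = [int(np.argmax(relevance))]
    while len(selected) < min(n_select, n_features):
        best_j, best_score = None, -np.inf
        for j in range(n_features):
            if j in selected:
                continue
            redundancy = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1]) for s in selected])
            score = relevance[j] - redundancy
            if score > best_score:
                best_j, best_score = j, score
        selected.append(best_j)
    return selected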
1907.07240
2966730110
Social media has become an integral part of our daily lives. During time-critical events, the public shares a variety of posts on social media including reports for resource needs, damages, and help offerings for the affected community. Such posts can be relevant and may contain valuable situational awareness information. However, the information overload of social media challenges the timely processing and extraction of relevant information by the emergency services. Furthermore, the growing usage of multimedia content in the social media posts in recent years further adds to the challenge in timely mining relevant information from social media. In this paper, we present a novel method for multimodal relevancy classification of social media posts, where relevancy is defined with respect to the information needs of emergency management agencies. Specifically, we experiment with the combination of semantic textual features with the image features to efficiently classify a relevant multimodal social media post. We validate our method using an evaluation of classifying the data from three real-world crisis events. Our experiments demonstrate that features based on the proposed hybrid framework of exploiting both textual and image content improve the performance of identifying relevant posts. In the light of these experiments, the application of the proposed classification method could reduce cognitive load on emergency services, in filtering multimodal public posts at large scale.
There has been extensive research on the topic of social media for emergency management in the last decade @cite_5 @cite_6 . The data generated over social media have such high volume, variety, and velocity that they cause the challenges of "Big Crisis Data", which often overwhelm the emergency services @cite_6 . The crisis informatics literature @cite_17 has investigated social media for emergency services from diverse multidisciplinary perspectives. User studies with emergency responders have identified information overload as one of the key barriers to the efficient use of social media platforms by PIOs and emergency services @cite_11 @cite_6 . These overload factors include the need to process unstructured and noisy multimodal social media content at large scale, which is beyond the capacity of limited human resources. Furthermore, characterizing the relevancy of social media content is very contextual, time-sensitive, and often challenging @cite_0 .
{ "cite_N": [ "@cite_17", "@cite_6", "@cite_0", "@cite_5", "@cite_11" ], "mid": [ "2472441007", "2493475353", "2898274636", "1934362406", "2573190752" ], "abstract": [ "Crisis informatics is a multidisciplinary field combining computing and social science knowledge of disasters; its central tenet is that people use personal information and communication technology to respond to disaster in creative ways to cope with uncertainty. We study and develop computational support for collection and sociobehavioral analysis of online participation (i.e., tweets and Facebook posts) to address challenges in disaster warning, response, and recovery. Because such data are rarely tidy, we offer lessons—learned the hard way, as we have made every mistake described below—with respect to the opportunities and limitations of social media research on crisis events.", "", "The public expects a prompt response from emergency services to address requests for help posted on social media. However, the information overload of social media experienced by these organizations, coupled with their limited human resources, challenges them to timely identify and prioritize critical requests. This is particularly acute in crisis situations where any delay may have a severe impact on the effectiveness of the response. While social media has been extensively studied during crises, there is limited work on formally characterizing serviceable help requests and automatically prioritizing them for a timely response. In this paper, we present a formal model of serviceability called Social-EOC (Social Emergency Operations Center), which describes the elements of a serviceable message posted in social media that can be expressed as a request. We also describe a system for the discovery and ranking of highly serviceable requests, based on the proposed serviceability model. We validate the model for emergency services, by performing an evaluation based on real-world data from six crises, with ground truth provided by emergency management practitioners. Our experiments demonstrate that features based on the serviceability model improve the performance of discovering and ranking (nDCG up to 25 ) service requests over different baselines. In the light of these experiments, the application of the serviceability model could reduce the cognitive load on emergency operation center personnel, in filtering and ranking public requests at scale.", "Social media platforms provide active communication channels during mass convergence and emergency events such as disasters caused by natural hazards. As a result, first responders, decision makers, and the public can use this information to gain insight into the situation as it unfolds. In particular, many social media messages communicated during emergencies convey timely, actionable information. Processing social media messages to obtain such information, however, involves solving multiple challenges including: parsing brief and informal messages, handling information overload, and prioritizing different types of information found in messages. These challenges can be mapped to classical information processing operations such as filtering, classifying, ranking, aggregating, extracting, and summarizing. We survey the state of the art regarding computational methods to process social media messages and highlight both their contributions and shortcomings. 
In addition, we examine their particularities, and methodically examine a series of key subproblems ranging from the detection of events to the creation of actionable and useful summaries. Research thus far has, to a large extent, produced methods to extract situational awareness information from social media. In this survey, we cover these various approaches, and highlight their benefits and shortcomings. We conclude with research challenges that go beyond situational awareness, and begin to look at supporting decision making and coordinating emergency-response actions.", "Semi-structured interviews were conducted with U.S. public sector emergency managers to probe barriers to use of social media and reactions to possible software enhancements to support such use. The three most frequently described barriers were lack of personnel time to work on use of social media, lack of policies and guidelines for its use, and concern about the trustworthiness of pulled data. The most popular of the possible technological enhancements described for Twitter are filtering by category of user contributor, and display of posts on a GIS system with a map-based display." ] }
1907.07240
2966730110
Social media has become an integral part of our daily lives. During time-critical events, the public shares a variety of posts on social media including reports for resource needs, damages, and help offerings for the affected community. Such posts can be relevant and may contain valuable situational awareness information. However, the information overload of social media challenges the timely processing and extraction of relevant information by the emergency services. Furthermore, the growing usage of multimedia content in the social media posts in recent years further adds to the challenge in timely mining relevant information from social media. In this paper, we present a novel method for multimodal relevancy classification of social media posts, where relevancy is defined with respect to the information needs of emergency management agencies. Specifically, we experiment with the combination of semantic textual features with the image features to efficiently classify a relevant multimodal social media post. We validate our method using an evaluation of classifying the data from three real-world crisis events. Our experiments demonstrate that features based on the proposed hybrid framework of exploiting both textual and image content improve the performance of identifying relevant posts. In the light of these experiments, the application of the proposed classification method could reduce cognitive load on emergency services, in filtering multimodal public posts at large scale.
Among the social media analytics approaches, researchers have modeled public behavior in specific emergencies and addressed the problems of data collection and filtering, classification and summarization, as well as visualization of analyzed data for decision support @cite_5 . However, such work has centered on text analytics, with the exception of recent studies @cite_13 @cite_15 @cite_4 @cite_12 on processing the multimedia content of social posts. Moreover, current multimodal information processing approaches for social media mining during disasters have primarily analyzed only the damage assessment aspect of emergency management. We complement these recent studies by proposing a generic classification framework for relevant information that exploits both the textual and the image content of multimodal social media posts.
{ "cite_N": [ "@cite_4", "@cite_5", "@cite_15", "@cite_13", "@cite_12" ], "mid": [ "2768447691", "1934362406", "2962870381", "2963002801", "2626583996" ], "abstract": [ "Rapid access to situation-sensitive data through social media networks creates new opportunities to address a number of real-world problems. Damage assessment during disasters is a core situational awareness task for many humanitarian organizations that traditionally takes weeks and months. In this work, we analyze images posted on social media platforms during natural disasters to determine the level of damage caused by the disasters. We employ state-of-the-art machine learning techniques to perform an extensive experimentation of damage assessment using images from four major natural disasters. We show that the domain-specific fine-tuning of deep Convolutional Neural Networks (CNN) outperforms other state-of-the-art techniques such as Bag-of-Visual-Words (BoVW). High classification accuracy under both event-specific and cross-event test settings demonstrate that the proposed approach can effectively adapt deep-CNN features to identify the severity of destruction from social media images taken after a disaster strikes.", "Social media platforms provide active communication channels during mass convergence and emergency events such as disasters caused by natural hazards. As a result, first responders, decision makers, and the public can use this information to gain insight into the situation as it unfolds. In particular, many social media messages communicated during emergencies convey timely, actionable information. Processing social media messages to obtain such information, however, involves solving multiple challenges including: parsing brief and informal messages, handling information overload, and prioritizing different types of information found in messages. These challenges can be mapped to classical information processing operations such as filtering, classifying, ranking, aggregating, extracting, and summarizing. We survey the state of the art regarding computational methods to process social media messages and highlight both their contributions and shortcomings. In addition, we examine their particularities, and methodically examine a series of key subproblems ranging from the detection of events to the creation of actionable and useful summaries. Research thus far has, to a large extent, produced methods to extract situational awareness information from social media. In this survey, we cover these various approaches, and highlight their benefits and shortcomings. We conclude with research challenges that go beyond situational awareness, and begin to look at supporting decision making and coordinating emergency-response actions.", "The CrisisMMD multimodal Twitter dataset consists of several thousands of manually annotated tweets and images collected during seven major natural disasters including earthquakes, hurricanes, wildfires, and floods that happened in the year 2017 across different parts of the World. The provided datasets include three types of annotations. 
(1) Informative vs. not informative: informative; not informative; don’t know or can’t judge. (2) Humanitarian categories: affected individuals; infrastructure and utility damage; injured or dead people; missing or found people; rescue, volunteering or donation effort; vehicle damage; other relevant information; not relevant or can’t judge. (3) Damage severity assessment: severe damage; mild damage; little or no damage; don’t know or can’t judge.", "", "Computer Vision and Image Processing are emerging research paradigms. The increasing popularity of social media, micro-blogging services and ubiquitous availability of high-resolution smartphone cameras with pervasive connectivity are propelling our digital footprints and cyber activities. Such online human footprints related with an event-of-interest, if mined appropriately, can provide meaningful information to analyze the current course and pre- and post- impact leading to the organizational planning of various real-time smart city applications. In this paper, we investigate the narrative (texts) and visual (images) components of Twitter feeds to improve the results of queries by exploiting the deep contexts of each data modality. We employ Latent Semantic Analysis (LSA)-based techniques to analyze the texts and Discrete Cosine Transformation (DCT) to analyze the images which help establish the cross-correlations between the textual and image dimensions of a query. While each of the data dimensions helps improve the results of a specific query on its own, the contributions from the dual modalities can potentially provide insights that are greater than what can be obtained from the individual modalities. We validate our proposed approach using real Twitter feeds from a recent devastating flash flood in Ellicott City near the University of Maryland campus. Our results show that the images and texts can be classified with 67% and 94% accuracies respectively" ] }
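A minimal version of the hybrid text-plus-image relevancy classifier described above is sketched here: feature vectors for the two modalities (however they are obtained, e.g. from pretrained text and image encoders) are concatenated and fed to a linear classifier. The feature dimensions and the logistic-regression head are assumptions for illustration and do not reproduce the paper's architecture.

import numpy as np
from sklearn.linear_model import LogisticRegression

def train_multimodal_relevancy(text_feats, image_feats, labels):
    # text_feats: (n, d_text), image_feats: (n, d_image), labels: (n,) with 1 = relevant.
    # Early fusion: concatenate the two modalities and fit a linear classifier.
    X = np.hstack([text_feats, image_feats])
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X, labels)
    return clf

def predict_relevancy(clf, text_feats, image_feats):
    X = np.hstack([text_feats, image_feats])
    return clf.predict(X)       # 1 = relevant to emergency services, 0 = not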
1907.07378
2957866194
Competency Questions (CQs) for an ontology and similar artefacts aim to provide insights into the contents of an ontology and to demarcate its scope. The absence of a controlled natural language, tooling and automation to support the authoring of CQs has hampered their effective use in ontology development and evaluation. The few question templates that exist are based on informal analyses of a small number of CQs and have limited coverage of question types and sentence constructions. We aim to fill this gap by proposing a template-based CNL to author CQs, called CLaRO. For its design, we exploited a new dataset of 234 CQs that had been processed automatically into 106 patterns, which we analysed and used to design a template-based CNL, with an additional CNL model and XML serialisation. The CNL was evaluated with a subset of questions from the original dataset and with two sets of newly sourced CQs. The coverage of CLaRO, with its 93 main templates and 41 linguistic variants, is about 90% for unseen questions. CLaRO has the potential to facilitate streamlining the formalisation of ontology content requirements and, given that about one third of the competency questions in the test sets turned out to be invalid questions, assist in writing good questions.
Given that a CNL for CQs is supposed to function for specifying requirements for any ontology, the logic-based knowledge representation must be decoupled from the natural language. At the same time, it is well known that the other extreme---free-form sentences---makes it exceedingly hard to formalise, be this for query or axiom generation; e.g., most recently, the system of @cite_7 allows free text as input, but only four types of questions may generate answers in its IR-based approach (some definition questions, yes/no, facts, and lists). A middle way to bridge this gap is to design a CNL.
{ "cite_N": [ "@cite_7" ], "mid": [ "2888712326" ], "abstract": [ "In the context of the development of a virtual tutor to support distance learning courses, this paper presents an approach to solve the problem of automatically answering questions posed by students in a natural language (Portuguese). Our approach is based on three main pillars: an ontology, a conversion process, and a querying process. First, the ontology, was built to model the knowledge regarding all aspects of the course; second, we defined a way of converting a natural language question to a SPARQL query; finally, the SPARQL query is executed and the result extracted from the ontology. Focusing on the second pillar mentioned above (the conversion of a NL question), we explain the whole process and present the results regarding a set of preliminary experiments." ] }
1907.07378
2957866194
Competency Questions (CQs) for an ontology and similar artefacts aim to provide insights into the contents of an ontology and to demarcate its scope. The absence of a controlled natural language, tooling and automation to support the authoring of CQs has hampered their effective use in ontology development and evaluation. The few question templates that exist are based on informal analyses of a small number of CQs and have limited coverage of question types and sentence constructions. We aim to fill this gap by proposing a template-based CNL to author CQs, called CLaRO. For its design, we exploited a new dataset of 234 CQs that had been processed automatically into 106 patterns, which we analysed and used to design a template-based CNL, with an additional CNL model and XML serialisation. The CNL was evaluated with a subset of questions from the original dataset and with two sets of newly sourced CQs. The coverage of CLaRO, with its 93 main templates and 41 linguistic variants, is about 90% for unseen questions. CLaRO has the potential to facilitate streamlining the formalisation of ontology content requirements and, given that about one third of the competency questions in the test sets turned out to be invalid questions, assist in writing good questions.
CNLs for computation have been proposed as a solution for various information management aspects, such as query formulation to hide SPARQL syntax (e.g., Sparklis @cite_20 and Quelo @cite_15 ), generation of pseudo-NL sentences from axioms in an ontology to formalise them (e.g., ACE @cite_22 ), and software requirements formulation with, notably, the Semantics of Business Vocabulary and Rules (SBVR) @cite_24 . Recent literature reviews on CNLs within the scope of the Semantic Web can be found in @cite_19 @cite_16 , and more broadly on CNLs in @cite_3 . They all---22 tools and proposals in @cite_16 and 22 in @cite_19 ---focus on assertions for ontology authoring, even those for queries, such as ``give me all writers who ...'' rather than ``which writers ...?'', and even where they are questions, they target instances rather than the TBox level of typical CQs, and hence take a different form.
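To make the template-based idea concrete, the following is a minimal sketch, assuming a tiny set of invented patterns, of how a CQ could be checked against templates once its domain-specific terms are replaced by placeholders. The `EC` placeholder convention, the template strings, and the helper functions are illustrative assumptions only; they are not the actual CLaRO templates or tooling.

```python
import re

# Illustrative patterns only; the real CLaRO CNL has 93 main templates
# plus 41 variants, which are not reproduced here.
TEMPLATES = [
    r"what (is|are) (the )?EC1( of (the )?EC2)?\?",
    r"which EC1 (has|have) EC2\?",
    r"(is|are) (the )?EC1 (a|an) EC2\?",
    r"how many EC1 (does|do) (the )?EC2 have\?",
]

def normalise(cq: str, entities) -> str:
    """Replace domain-specific entity/class mentions with EC placeholders."""
    out = cq.strip().lower()
    for i, term in enumerate(entities, start=1):
        out = out.replace(term.lower(), f"EC{i}")
    return out

def matches_template(cq: str, entities) -> bool:
    """Return True if the normalised CQ fits one of the known patterns."""
    normalised = normalise(cq, entities)
    return any(re.fullmatch(t, normalised) for t in TEMPLATES)

if __name__ == "__main__":
    # "Which writers have awards?" normalises to "which EC1 have EC2?"
    print(matches_template("Which writers have awards?", ["writers", "awards"]))
```

A real authoring tool would ship the full template set and detect the entity and class mentions itself rather than taking them as an argument.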
{ "cite_N": [ "@cite_22", "@cite_3", "@cite_24", "@cite_19", "@cite_15", "@cite_16", "@cite_20" ], "mid": [ "1562074255", "2101493333", "", "1832719046", "2141934106", "2330889199", "2260667602" ], "abstract": [ "This technical report describes the discourse representation structures (DRS) derived from texts written in version 5 of Attempto Controlled English (ACE 5). Among other things, ACE 5 supports modal statements, negation as failure, and sentence subordination. These features require an extended form of discourse representation structures. The discourse representation structure itself uses a reified, or ‘flat’ notation, meaning that its atomic conditions are built from a small number of predefined predicates that take constants standing for words of the ACE text as their arguments. Furthermore, each logical atom gets an index relating it to the sentence of the ACE text from which it was derived.", "What is here called controlled natural language CNL has traditionally been given many different names. Especially during the last four decades, a wide variety of such languages have been designed. They are applied to improve communication among humans, to improve translation, or to provide natural and intuitive representations for formal notations. Despite the apparent differences, it seems sensible to put all these languages under the same umbrella. To bring order to the variety of languages, a general classification scheme is presented here. A comprehensive survey of existing English-based CNLs is given, listing and describing 100 languages from 1930 until today. Classification of these languages reveals that they form a single scattered cloud filling the conceptual space between natural languages such as English on the one end and formal languages such as propositional logic on the other. The goal of this article is to provide a common terminology and a common model for CNL, to contribute to the understanding of their general nature, to provide a starting point for researchers interested in the area, and to help developers to make design decisions.", "", "Natural Language Generation (NLG) is concerned with transforming given content input into a natural language out- put, given some communicative goal. Although this input can take various forms and representations, it is the semantic conceptual representations that have always been considered as the \"natural\" starting ground for NLG. Therefore, it is natural that the Se- mantic Web (SW), with its machine-processable representation of information with explicitly defined semantics, has attracted the interest of NLG practitioners from early on. We attempt to provide an overview of the main paradigms of NLG from SW data, emphasizing how the Semantic Web provides opportunities for the NLG community to improve their state-of-the-art ap- proaches whilst bringing about challenges that need to be addressed before we can speak of a real symbiosis between NLG and the Semantic Web.", "In this paper we present a formal framework and an experimental software supporting the user in the task of formulating a precise query ‐ which best captures their information needs ‐ even in the case of complete ignorance of the vocabulary of the underlying information system holding the data. Our intelligent interface is driven by means of appropriate automated reasoning techniques over an ontology describing the domain of the data in the information system. 
We will define what a query is and how it is internally represented, which operations are available to the user in order to modify the query and how contextual feedback is provided about it presenting only relevant pieces of information. We will then describe the elements that constitute the query interface available to the user, providing visual access to the underlying reasoning services and operations for query manipulation. Lastly, we will define a suitable representation in “linear form”, starting from which the query can be more easily expressed in natural language.", "One of the core challenges for building the semantic web is the creation of ontologies, a process known as ontology authoring. Controlled natural languages (CNLs) propose different frameworks for interfacing and creating ontologies in semantic web systems using restricted natural language. However, in order to engage non-expert users with no background in knowledge engineering, these language interfaces must be reliable, easy to understand and accepted by users. This paper includes the state-of-the-art for CNLs in terms of ontology authoring and the semantic web. In addition, it includes a detailed analysis of user evaluations with respect to each CNL and offers analytic conclusions with respect to the field.", "SPARKLIS is a Semantic Web tool that helps users explore and query SPARQL endpoints by guiding them in the interactive building of questions and answers, from simple ones to complex ones. It combines the fine-grained guidance of faceted search, most of the expressivity of SPARQL, and the readability of (controlled) natural languages. No knowledge of the vocabulary and schema are required for users. Many SPARQL features are covered: multidimensional queries, union, negation, optional, filters, aggregations, ordering. Queries are verbalized in either English or French, so that no knowledge of SPARQL is ever necessary. All of this is implemented in a portable Web application, SPARKLIS, and has been evaluated on many endpoints and questions. No endpoint-specific configuration is necessary as the data schema is discovered on the fly by the tool. Online since April 2014, thousands of queries have been formed by hundreds of users over more than a hundred endpoints." ] }
1907.07171
2959108703
An open secret in contemporary machine learning is that many models work beautifully on standard benchmarks but fail to generalize outside the lab. This has been attributed to training on biased data, which provide poor coverage over real world events. Generative models are no exception, but recent advances in generative adversarial networks (GANs) suggest otherwise -- these models can now synthesize strikingly realistic and diverse images. Is generative modeling of photos a solved problem? We show that although current GANs can fit standard datasets very well, they still fall short of being comprehensive models of the visual manifold. In particular, we study their ability to fit simple transformations such as camera movements and color changes. We find that the models reflect the biases of the datasets on which they are trained (e.g., centered objects), but that they also exhibit some capacity for generalization: by "steering" in latent space, we can shift the distribution while still creating realistic images. We hypothesize that the degree of distributional shift is related to the breadth of the training data distribution, and conduct experiments that demonstrate this. Code is released on our project page: this https URL
Biases from training data and network architecture both factor into the generalization capacity of learned models @cite_16 @cite_8 @cite_17 . Dataset biases partly come from human preferences in taking photos: we typically capture images in specific ``canonical'' views that are not fully representative of the entire visual world @cite_20 @cite_13 . When models are trained to fit these datasets, they inherit the biases in the data. Such biases may result in models that misrepresent the given task -- such as tendencies towards texture bias rather than shape bias on ImageNet classifiers @cite_8 -- which in turn limits their generalization performance on similar objectives @cite_6 . Our latent space trajectories transform the output corresponding to various camera motion and image editing operations, but ultimately we are constrained by biases in the data and cannot extrapolate arbitrarily far beyond the dataset.
{ "cite_N": [ "@cite_8", "@cite_6", "@cite_16", "@cite_13", "@cite_20", "@cite_17" ], "mid": [ "2952610664", "2807007689", "2031342017", "1987947320", "2101734194", "2957285709" ], "abstract": [ "Convolutional Neural Networks (CNNs) are commonly thought to recognise objects by learning increasingly complex representations of object shapes. Some recent studies suggest a more important role of image textures. We here put these conflicting hypotheses to a quantitative test by evaluating CNNs and human observers on images with a texture-shape cue conflict. We show that ImageNet-trained CNNs are strongly biased towards recognising textures rather than shapes, which is in stark contrast to human behavioural evidence and reveals fundamentally different classification strategies. We then demonstrate that the same standard architecture (ResNet-50) that learns a texture-based representation on ImageNet is able to learn a shape-based representation instead when trained on \"Stylized-ImageNet\", a stylized version of ImageNet. This provides a much better fit for human behavioural performance in our well-controlled psychophysical lab setting (nine experiments totalling 48,560 psychophysical trials across 97 observers) and comes with a number of unexpected emergent benefits such as improved object detection performance and previously unseen robustness towards a wide range of image distortions, highlighting advantages of a shape-based representation.", "Deep convolutional network architectures are often assumed to guarantee generalization for small image translations and deformations. In this paper we show that modern CNNs (VGG16, ResNet50, and InceptionResNetV2) can drastically change their output when an image is translated in the image plane by a few pixels, and that this failure of generalization also happens with other realistic small image transformations. Furthermore, the deeper the network the more we see these failures to generalize. We show that these failures are related to the fact that the architecture of modern CNNs ignores the classical sampling theorem so that generalization is not guaranteed. We also show that biases in the statistics of commonly used image datasets makes it unlikely that CNNs will learn to be invariant to these transformations. Taken together our results suggest that the performance of CNNs in object recognition falls far short of the generalization capabilities of humans.", "Datasets are an integral part of contemporary object recognition research. They have been the chief reason for the considerable progress in the field, not just as source of large amounts of training data, but also as means of measuring and comparing performance of competing algorithms. At the same time, datasets have often been blamed for narrowing the focus of object recognition research, reducing it to a single benchmark performance number. Indeed, some datasets, that started out as data capture efforts aimed at representing the visual world, have become closed worlds unto themselves (e.g. the Corel world, the Caltech-101 world, the PASCAL VOC world). With the focus on beating the latest benchmark numbers on the latest dataset, have we perhaps lost sight of the original purpose? The goal of this paper is to take stock of the current state of recognition datasets. We present a comparison study using a set of popular datasets, evaluated based on a number of criteria including: relative data bias, cross-dataset generalization, effects of closed-world assumption, and sample value. 
The experimental results, some rather surprising, suggest directions that can improve dataset collection as well as algorithm evaluation protocols. But more broadly, the hope is to stimulate discussion in the community regarding this very important, but largely neglected issue.", "The concept of visual balance is innate for humans, and influences how we perceive visual aesthetics and cognize harmony. Although visual balance is a vital principle of design and taught in schools of designs, it is barely quantified. On the other hand, with emergence of automantic semi-automatic visual designs for self-publishing, learning visual balance and computationally modeling it, may escalate aesthetics of such designs. In this paper, we present how questing for understanding visual balance inspired us to revisit one of the well-known theories in visual arts, the so called theory of “visual rightness”, elucidated by Arnheim. We define Arnheim’s hypothesis as a design mining problem with the goal of learning visual balance from work of professionals. We collected a dataset of 120K images that are aesthetically highly rated, from a professional photography website. We then computed factors that contribute to visual balance based on the notion of visual saliency. We fitted a mixture of Gaussians to the saliency maps of the images, and obtained the hotspots of the images. Our inferred Gaussians align with Arnheim’s hotspots, and confirm his theory. Moreover, the results support the viability of the center of mass, symmetry, as well as the Rule of Thirds in our dataset.", "Although human object recognition is supposedly robust to viewpoint, much research on human perception indicates that there is a preferred or \"canonical\" view of objects. This phenomenon was discovered more than 30 years ago but the canonical view of only a small number of categories has been validated experimentally. Moreover, the explanation for why humans prefer the canonical view over other views remains elusive. In this paper we ask: Can we use Internet image collections to learn more about canonical views? We start by manually finding the most common view in the results returned by Internet search engines when queried with the objects used in psychophysical experiments. Our results clearly show that the most likely view in the search engine corresponds to the same view preferred by human subjects in experiments. We also present a simple method to find the most likely view in an image collection and apply it to hundreds of categories. Using the new data we have collected we present strong evidence against the two most prominent formal theories of canonical views and provide novel constraints for new theories.", "" ] }
1907.07171
2959108703
An open secret in contemporary machine learning is that many models work beautifully on standard benchmarks but fail to generalize outside the lab. This has been attributed to training on biased data, which provide poor coverage over real world events. Generative models are no exception, but recent advances in generative adversarial networks (GANs) suggest otherwise -- these models can now synthesize strikingly realistic and diverse images. Is generative modeling of photos a solved problem? We show that although current GANs can fit standard datasets very well, they still fall short of being comprehensive models of the visual manifold. In particular, we study their ability to fit simple transformations such as camera movements and color changes. We find that the models reflect the biases of the datasets on which they are trained (e.g., centered objects), but that they also exhibit some capacity for generalization: by "steering" in latent space, we can shift the distribution while still creating realistic images. We hypothesize that the degree of distributional shift is related to the breadth of the training data distribution, and conduct experiments that demonstrate this. Code is released on our project page: this https URL
The recent progress in generative models has enabled interesting applications for content creation @cite_11 @cite_14 , including variants that enable end users to control and fine-tune the generated output @cite_3 @cite_19 @cite_12 . A by-product of the current work is to further enable users to modify various image properties by turning a single knob -- the magnitude of the learned transformation. Similar to @cite_24 , we show that GANs allow users to achieve basic image editing operations by manipulating the latent space. However, we further demonstrate that these image manipulations are not just a simple creativity tool; they also provide us with a window into the biases and generalization capacity of these models.
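As a rough illustration of the single-knob idea, the sketch below adds a scaled, learned direction to a latent code before decoding. The names `generator` and `zoom_direction` are hypothetical placeholders for a pre-trained GAN and a learned transformation vector; they are not part of any cited implementation.

```python
import numpy as np

def edit_latent(z: np.ndarray, direction: np.ndarray, alpha: float) -> np.ndarray:
    """Move a latent code along a (unit-normalised) direction; alpha is the knob."""
    return z + alpha * (direction / np.linalg.norm(direction))

# Hypothetical usage, assuming `generator` and `zoom_direction` exist:
# z = np.random.randn(1, 512)
# frames = [generator(edit_latent(z, zoom_direction, a))
#           for a in np.linspace(-3.0, 3.0, 7)]
```

Sweeping alpha from negative to positive values would then render the same sample under increasing amounts of the learned transformation.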
{ "cite_N": [ "@cite_14", "@cite_3", "@cite_24", "@cite_19", "@cite_12", "@cite_11" ], "mid": [ "2904367110", "", "2519536754", "", "2901107321", "2952716587" ], "abstract": [ "We propose an alternative generator architecture for generative adversarial networks, borrowing from style transfer literature. The new architecture leads to an automatically learned, unsupervised separation of high-level attributes (e.g., pose and identity when trained on human faces) and stochastic variation in the generated images (e.g., freckles, hair), and it enables intuitive, scale-specific control of the synthesis. The new generator improves the state-of-the-art in terms of traditional distribution quality metrics, leads to demonstrably better interpolation properties, and also better disentangles the latent factors of variation. To quantify interpolation quality and disentanglement, we propose two new, automated methods that are applicable to any generator architecture. Finally, we introduce a new, highly varied and high-quality dataset of human faces.", "", "Realistic image manipulation is challenging because it requires modifying the image appearance in a user-controlled way, while preserving the realism of the result. Unless the user has considerable artistic skill, it is easy to “fall off” the manifold of natural images while editing. In this paper, we propose to learn the natural image manifold directly from data using a generative adversarial neural network. We then define a class of image editing operations, and constrain their output to lie on that learned manifold at all times. The model automatically adjusts the output keeping all edits as realistic as possible. All our manipulations are expressed in terms of constrained optimization and are applied in near-real time. We evaluate our algorithm on the task of realistic photo manipulation of shape and color. The presented method can further be used for changing one image to look like the other, as well as generating novel imagery from scratch based on user’s scribbles.", "", "Generative Adversarial Networks (GANs) have recently achieved impressive results for many real-world applications, and many GAN variants have emerged with improvements in sample quality and training stability. However, they have not been well visualized or understood. How does a GAN represent our visual world internally? What causes the artifacts in GAN results? How do architectural choices affect GAN learning? Answering such questions could enable us to develop new insights and better models. In this work, we present an analytic framework to visualize and understand GANs at the unit-, object-, and scene-level. We first identify a group of interpretable units that are closely related to object concepts using a segmentation-based network dissection method. Then, we quantify the causal effect of interpretable units by measuring the ability of interventions to control objects in the output. We examine the contextual relationship between these units and their surroundings by inserting the discovered object concepts into new images. We show several practical applications enabled by our framework, from comparing internal representations across different layers, models, and datasets, to improving GANs by locating and removing artifact-causing units, to interactively manipulating objects in a scene. 
We provide open source interpretation tools to help researchers and practitioners better understand their GAN models.", "Despite recent progress in generative image modeling, successfully generating high-resolution, diverse samples from complex datasets such as ImageNet remains an elusive goal. To this end, we train Generative Adversarial Networks at the largest scale yet attempted, and study the instabilities specific to such scale. We find that applying orthogonal regularization to the generator renders it amenable to a simple \"truncation trick,\" allowing fine control over the trade-off between sample fidelity and variety by reducing the variance of the Generator's input. Our modifications lead to models which set the new state of the art in class-conditional image synthesis. When trained on ImageNet at 128x128 resolution, our models (BigGANs) achieve an Inception Score (IS) of 166.5 and Frechet Inception Distance (FID) of 7.4, improving over the previous best IS of 52.52 and FID of 18.6." ] }
1907.07171
2959108703
An open secret in contemporary machine learning is that many models work beautifully on standard benchmarks but fail to generalize outside the lab. This has been attributed to training on biased data, which provide poor coverage over real world events. Generative models are no exception, but recent advances in generative adversarial networks (GANs) suggest otherwise -- these models can now synthesize strikingly realistic and diverse images. Is generative modeling of photos a solved problem? We show that although current GANs can fit standard datasets very well, they still fall short of being comprehensive models of the visual manifold. In particular, we study their ability to fit simple transformations such as camera movements and color changes. We find that the models reflect the biases of the datasets on which they are trained (e.g., centered objects), but that they also exhibit some capacity for generalization: by "steering" in latent space, we can shift the distribution while still creating realistic images. We hypothesize that the degree of distributional shift is related to the breadth of the training data distribution, and conduct experiments that demonstrate this. Code is released on our project page: this https URL
We note a few concurrent papers that also explore trajectories in GAN latent space. @cite_23 learns linear walks in the latent space that correspond to various facial characteristics; they use these walks to measure biases in facial attribute detectors, whereas we study biases in the generative model that originate from training data. @cite_25 also assumes linear latent space trajectories and learns paths for face attribute editing according to semantic concepts such as age and expression, thus demonstrating disentanglement properties of the latent space. @cite_2 applies a linear walk to achieve transformations in learning and editing features that pertain to cognitive properties of an image such as memorability, aesthetics, and emotional valence. Unlike these works, we do not require an attribute detector or assessor function to learn the latent space trajectory, and therefore our loss function is based on image similarity between source and target images. In addition to linear walks, we explore using non-linear walks to achieve camera motion and color transformations.
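A minimal sketch of learning such a walk from an image-similarity objective is shown below, assuming a toy linear map as a stand-in generator and a simple brightening function as the target edit. It only illustrates the loss structure, not the training setup or models of the cited papers.

```python
import torch

torch.manual_seed(0)
latent_dim, img_dim = 64, 256
G = torch.nn.Linear(latent_dim, img_dim)        # stand-in for a pre-trained generator
target_edit = lambda x: 1.1 * x                 # stand-in pixel-space edit ("brighten")

w = torch.zeros(latent_dim, requires_grad=True) # the linear walk to be learned
opt = torch.optim.Adam([w], lr=1e-2)

for step in range(200):
    z = torch.randn(32, latent_dim)
    target = target_edit(G(z)).detach()         # edited version of the original output
    loss = torch.mean((G(z + w) - target) ** 2) # image similarity between walk and edit
    opt.zero_grad()
    loss.backward()
    opt.step()
```

A non-linear walk would replace the single vector `w` with a small learned network applied to `z`, trained with the same kind of objective.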
{ "cite_N": [ "@cite_25", "@cite_23", "@cite_2" ], "mid": [ "2963577681", "2950113871", "2950419363" ], "abstract": [ "Despite the recent advance of Generative Adversarial Networks (GANs) in high-fidelity image synthesis, there lacks enough understandings on how GANs are able to map the latent code sampled from a random distribution to a photo-realistic image. Previous work assumes the latent space learned by GAN follows a distributed representation but observes the vector arithmetic phenomenon of the output's semantics in latent space. In this work, we interpret the semantics hidden in the latent space of well-trained GANs. We find that the latent code for well-trained generative models, such as ProgressiveGAN and StyleGAN, actually learns a disentangled representation after some linear transformations. We make a rigorous analysis on the encoding of various semantics in the latent space as well as their properties, and then study how these semantics are correlated to each other. Based on our analysis, we propose a simple and general technique, called InterFaceGAN, for semantic face editing in latent space. Given a synthesized face, we are able to faithfully edit its various attributes such as pose, expression, age, presence of eyeglasses, without retraining the GAN model. Furthermore, we show that even the artifacts occurred in output images are able to be fixed using same approach. Extensive results suggest that learning to synthesize faces spontaneously brings a disentangled and controllable facial attribute representation", "We introduce a simple framework for identifying biases of a smiling attribute classifier. Our method poses counterfactual questions of the form: how would the prediction change if this face characteristic had been different? We leverage recent advances in generative adversarial networks to build a realistic generative model of face images that affords controlled manipulation of specific image characteristics. We introduce a set of metrics that measure the effect of manipulating a specific property of an image on the output of a trained classifier. Empirically, we identify several different factors of variation that affect the predictions of a smiling classifier trained on CelebA.", "We introduce a framework that uses Generative Adversarial Networks (GANs) to study cognitive properties like memorability, aesthetics, and emotional valence. These attributes are of interest because we do not have a concrete visual definition of what they entail. What does it look like for a dog to be more or less memorable? GANs allow us to generate a manifold of natural-looking images with fine-grained differences in their visual attributes. By navigating this manifold in directions that increase memorability, we can visualize what it looks like for a particular generated image to become more or less memorable. The resulting visual definitions\" surface image properties (like object size\") that may underlie memorability. Through behavioral experiments, we verify that our method indeed discovers image manipulations that causally affect human memory performance. We further demonstrate that the same framework can be used to analyze image aesthetics and emotional valence. Visit the GANalyze website at this http URL." ] }
1907.07349
2960672606
Edge computing in the Internet of Things brings applications and content closer to the users by introducing an additional computational layer at the network infrastructure, between cloud and the resource-constrained data producing devices and user equipment. This way, the opportunistic nature of the operational environment is addressed by introducing computational power in location with low latency and high bandwidth. However, location-aware deployment of edge computing infrastructure requires careful placement scheme for edge servers. To provide the best possible Quality of Service for the user applications, their proximity needs to be optimized. Moreover, the deployment faces practical limitations in budget limitations, hardware requirements of servers and in online load balancing between servers. To address these challenges, we formulate the edge server placement as a capacitated location-allocation problem, while minimizing the distance between servers and access points of a real city-wide Wi-Fi network deployment. In our algorithm, we utilize both upper and lower server capacity constraints for load balancing. Furthermore, we enable sharing of workload between servers to facilitate deployments with low capacity servers. The performance of the algorithm is demonstrated in placement scenarios, exemplified by high capacity servers for edge computing and low capacity servers for Fog computing, with different parameters in a real-world data set. The data set consists of both dense deployment of access points in central areas, but also sparse deployment in suburban areas within the same network infrastructure. In comparison, we show that previous approaches do not sufficiently address such deployment. The presented algorithm is able to provide optimal placements that minimize the distances and provide balanced workload with sharing by following the capacity constraints.
Previous studies on edge server placement have focused on algorithms for clustering access points with the aim of finding candidate locations for the servers as cluster heads. In these works, clustering was based on k-means @cite_24 , k-means with mixed-integer quadratic programming @cite_41 , graph theory, as in the minimum dominating set problem @cite_15 , hierarchical tree-like structures @cite_1 @cite_14 , multi-objective constraint optimization @cite_11 and mixed-integer linear programming @cite_30 , and DBSCAN clustering combined with optimization based on a facility location problem @cite_3 . A heuristic decision-support management system for server placement was presented in @cite_16 . For clustering, different sets of parameters were considered, such as individual server capacity, the number of servers, geo-locations of servers, minimal latencies and maximized traffic inside the clusters. Typically, co-location is considered, where the servers are placed next to access points in a geographical area.
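As a simplified sketch of the clustering-style placements surveyed above, the snippet below runs plain k-means on synthetic access-point coordinates and treats the cluster centres as candidate server sites. Capacity constraints and the more elaborate optimisation formulations are deliberately left out, and the coordinates are invented for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
access_points = rng.uniform(0, 10, size=(500, 2))   # synthetic AP coordinates (km)

k = 8                                               # number of edge servers (budget)
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(access_points)

server_locations = km.cluster_centers_              # candidate server sites
assignment = km.labels_                             # AP -> server allocation
workload = np.bincount(assignment, minlength=k)     # APs per server, often unbalanced
```

The unbalanced `workload` counts are exactly why capacitated formulations with load balancing, such as the one proposed in this paper, are needed on top of plain clustering.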
{ "cite_N": [ "@cite_30", "@cite_14", "@cite_41", "@cite_1", "@cite_3", "@cite_24", "@cite_15", "@cite_16", "@cite_11" ], "mid": [ "2792782202", "2739769312", "2916014425", "2947632778", "2887279295", "2597750431", "2906068868", "2514431540", "2809740924" ], "abstract": [ "Mobile edge computing (MEC) is an emerging technology that aims at pushing applications and content close to the users (e.g., at base stations, access points, and aggregation networks) to reduce latency, improve quality of experience, and ensure highly efficient network operation and service delivery. It principally relies on virtualization-enabled MEC servers with limited capacity at the edge of the network. One key issue is to dimension such systems in terms of server size, server number, and server operation area to meet MEC goals. In this paper, we formulate this problem as a mixed integer linear program. We then propose a graph-based algorithm that, taking into account a maximum MEC server capacity, provides a partition of MEC clusters, which consolidates as many communications as possible at the edge. We use a dataset of mobile communications to extensively evaluate them with real world spatio-temporal human dynamics. In addition to quantifying macroscopic MEC benefits, the evaluation shows that our algorithm provides MEC area partitions that largely offload the core, thus pushing the load at the edge (e.g., with 10 small MEC servers between 55 and 64 of the traffic stay at the edge), and that are well balanced through time.", "", "", "In this article, we propose content-centric, in-network content caching and placement approaches that leverage cooperation among edge cloud devices, content popularity, and GPS trajectory information to improve content delivery speeds, network traffic congestion, cache resource utilization efficiency, and users' quality of experience in highly populated cities. More specifically, our proposed approaches exploit collaborative filtering theory to provide accurate and efficient content popularity predictions to enable proactive in-network caching of Internet content. We propose a practical content delivery architecture that consists of standalone edge cloud devices to be deployed in the city to cache and process popular Internet content as it disseminates throughout the network. We also show that our proposed approaches ensure responsive cloud content delivery with minimized service disruption.", "Edge computing provides an attractive platform for bringing data and processing closer to users in networked environments. Several edge proposals aim to place the edge servers at a couple hop distance from the client to ensure lowest possible compute and network delay. An attractive edge server placement is to co-locate it with existing (cellular) base stations to avoid additional infrastructure establishment costs. However, determining the exact locations for edge servers is an important question that must be resolved for optimal placement. In this paper, we present Anveshak1, a framework that solves the problem of placing edge servers in a geographical topology and provides the optimal solution for edge providers. Our proposed solution considers both end-user application requirements as well as deployment and operating costs incurred by edge platform providers. The placement optimization metric of Anveshak considers the request pattern of users and existing user-established edge servers. 
In our evaluation based on real datasets, we show that Anveshak achieves 67 increase in user satisfaction while maintaining high server utilization.", "A device for selectively moistening the flap of envelopes has a moistening member and a pivoting moistening deflector selectively operable to moisten the flap or not. It further comprises a pivoting slide bar to the front of the moistening member and actuated between a protection position and a retracted position in front of the moistening member. The moistening deflector is adapted to cooperate with the slide bar and the moistening member and is coupled to the bar so that they are actuated virtually simultaneously. The device finds applications in automatic mail handling.", "Remote clouds are gradually unable to achieve ultra-low latency to meet the requirements of mobile users because of the intolerable long distance between remote clouds and mobile users and the network congestion caused by the tremendous number of users. Mobile edge computing, a new paradigm, has been proposed to mitigate aforementioned effects. Existing studies mostly assume the edge servers have been deployed properly and they just pay attention to how to minimize the delay between edge servers and mobile users. In this paper, considering the practical environment, we investigate how to deploy edge servers effectively and economically in wireless metropolitan area networks. Thus, we address the problem of minimizing the number of edge servers while ensuring some QoS requirements. Aiming at more consistence with a generalized condition, we extend the definition of the dominating set, and transform the addressed problem into the minimum dominating set problem in graph theory. In addition, two conditions are considered for the capacities of edge servers: one is that the capacities of edge servers can be configured on demand, and the other is that all the edge servers have the same capacities. For the on-demand condition, a greedy based algorithm is proposed to find the solution, and the key idea is to iteratively choose nodes that can connect as many other nodes as possible under the delay, degree and cluster size constraints. Furthermore, a simulated annealing based approach is given for global optimization. For the second condition, a greedy based algorithm is also proposed to satisfy the capacity constraint of edge servers and minimize the number of edge servers simultaneously. The simulation results show that the proposed algorithms are feasible.", "We present @math , a decision support framework to provision edge servers for online services providers (OSPs). @math takes advantage of the increasingly flexible edge server placement, which is enabled by new technologies such as edge computing platforms, cloudlets and network function virtualization, to optimize the overall performance and cost of edge infrastructures. The key difference between @math and traditional server placement approaches lies on that @math can discover proper unforeseen edge locations which significantly improve the efficiency and reduce the cost of edge provisioning. We show how @math effectively identifies promising edge locations which are close to a collection of users merely with inaccurate network distance estimation methods, e.g., geographic coordinate (GC) and network coordinate systems (NC). 
We also show how @math comprehensively considers various pragmatic concerns in edge provisioning, such as traffic limits by law or ISP policy, edge site deployment and resource usage cost, over-provisioning for fault tolerance, etc., with a simple optimization model. We simulate @math using real network data at global and county-wide scales. Measurement-driven simulations show that with a given cost budget @math can improve user performance by around 10-45 percent at global scale networks and 15-35 percent at a country-wide scale network.", "Abstract With the rapid increase in the development of the Internet of Things and 5G networks in the smart city context, a large amount of data (i.e., big data) is expected to be generated, resulting in increased latency for the traditional cloud computing paradigm. To reduce the latency, mobile edge computing has been considered for offloading a part of the workload from mobile devices to nearby edge servers that have sufficient computation resources. Although there has been significant research in the field of mobile edge computing, little attention has been given to understanding the placement of edge servers in smart cities to optimize the mobile edge computing network performance. In this paper, we study the edge server placement problem in mobile edge computing environments for smart cities. First, we formulate the problem as a multi-objective constraint optimization problem that places edge servers in some strategic locations with the objective to make balance the workloads of edge servers and minimize the access delay between the mobile user and edge server. Then, we adopt mixed integer programming to find the optimal solution. Experimental results based on Shanghai Telecom’s base station dataset show that our approach outperforms several representative approaches in terms of access delay and workload balancing." ] }
1907.07349
2960672606
Edge computing in the Internet of Things brings applications and content closer to the users by introducing an additional computational layer at the network infrastructure, between cloud and the resource-constrained data producing devices and user equipment. This way, the opportunistic nature of the operational environment is addressed by introducing computational power in location with low latency and high bandwidth. However, location-aware deployment of edge computing infrastructure requires careful placement scheme for edge servers. To provide the best possible Quality of Service for the user applications, their proximity needs to be optimized. Moreover, the deployment faces practical limitations in budget limitations, hardware requirements of servers and in online load balancing between servers. To address these challenges, we formulate the edge server placement as a capacitated location-allocation problem, while minimizing the distance between servers and access points of a real city-wide Wi-Fi network deployment. In our algorithm, we utilize both upper and lower server capacity constraints for load balancing. Furthermore, we enable sharing of workload between servers to facilitate deployments with low capacity servers. The performance of the algorithm is demonstrated in placement scenarios, exemplified by high capacity servers for edge computing and low capacity servers for Fog computing, with different parameters in a real-world data set. The data set consists of both dense deployment of access points in central areas, but also sparse deployment in suburban areas within the same network infrastructure. In comparison, we show that previous approaches do not sufficiently address such deployment. The presented algorithm is able to provide optimal placements that minimize the distances and provide balanced workload with sharing by following the capacity constraints.
The computing capacities of edge servers were assumed equal and fixed, except in @cite_24 @cite_15 , which allowed scaling of the server capacity on demand to distribute workload evenly, regardless of the resulting cluster size. In @cite_3 , no strict capacity limits were set for servers, but excessive workload could be offloaded to the cloud. The studies mainly focused on average workload, which can be utilized as a measure to maintain a constant QoS at all times. @cite_16 focused on worst-case workload by utilizing the maximum number of users found in the historical data. Measures to simulate the workload at different granularities were also utilized, e.g., the number of connections to the access points, total session length, or total connection time and the length of phone calls.
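The difference between dimensioning for average and for worst-case workload can be illustrated with a few lines of code; the hourly connection counts below are synthetic stand-ins for the session-log measures mentioned above.

```python
import numpy as np

rng = np.random.default_rng(1)
hourly_connections = rng.poisson(lam=20, size=(100, 24))  # 100 APs x 24 hours (synthetic)

avg_workload = hourly_connections.mean(axis=1)   # average load per AP (steady QoS)
peak_workload = hourly_connections.max(axis=1)   # worst-case load per AP

cluster = [0, 3, 7]                              # APs assigned to one hypothetical server
print("average-based capacity:", avg_workload[cluster].sum())
print("worst-case capacity:   ", peak_workload[cluster].sum())
```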
{ "cite_N": [ "@cite_24", "@cite_15", "@cite_16", "@cite_3" ], "mid": [ "2597750431", "2906068868", "2514431540", "2887279295" ], "abstract": [ "A device for selectively moistening the flap of envelopes has a moistening member and a pivoting moistening deflector selectively operable to moisten the flap or not. It further comprises a pivoting slide bar to the front of the moistening member and actuated between a protection position and a retracted position in front of the moistening member. The moistening deflector is adapted to cooperate with the slide bar and the moistening member and is coupled to the bar so that they are actuated virtually simultaneously. The device finds applications in automatic mail handling.", "Remote clouds are gradually unable to achieve ultra-low latency to meet the requirements of mobile users because of the intolerable long distance between remote clouds and mobile users and the network congestion caused by the tremendous number of users. Mobile edge computing, a new paradigm, has been proposed to mitigate aforementioned effects. Existing studies mostly assume the edge servers have been deployed properly and they just pay attention to how to minimize the delay between edge servers and mobile users. In this paper, considering the practical environment, we investigate how to deploy edge servers effectively and economically in wireless metropolitan area networks. Thus, we address the problem of minimizing the number of edge servers while ensuring some QoS requirements. Aiming at more consistence with a generalized condition, we extend the definition of the dominating set, and transform the addressed problem into the minimum dominating set problem in graph theory. In addition, two conditions are considered for the capacities of edge servers: one is that the capacities of edge servers can be configured on demand, and the other is that all the edge servers have the same capacities. For the on-demand condition, a greedy based algorithm is proposed to find the solution, and the key idea is to iteratively choose nodes that can connect as many other nodes as possible under the delay, degree and cluster size constraints. Furthermore, a simulated annealing based approach is given for global optimization. For the second condition, a greedy based algorithm is also proposed to satisfy the capacity constraint of edge servers and minimize the number of edge servers simultaneously. The simulation results show that the proposed algorithms are feasible.", "We present @math , a decision support framework to provision edge servers for online services providers (OSPs). @math takes advantage of the increasingly flexible edge server placement, which is enabled by new technologies such as edge computing platforms, cloudlets and network function virtualization, to optimize the overall performance and cost of edge infrastructures. The key difference between @math and traditional server placement approaches lies on that @math can discover proper unforeseen edge locations which significantly improve the efficiency and reduce the cost of edge provisioning. We show how @math effectively identifies promising edge locations which are close to a collection of users merely with inaccurate network distance estimation methods, e.g., geographic coordinate (GC) and network coordinate systems (NC). 
We also show how @math comprehensively considers various pragmatic concerns in edge provisioning, such as traffic limits by law or ISP policy, edge site deployment and resource usage cost, over-provisioning for fault tolerance, etc., with a simple optimization model. We simulate @math using real network data at global and county-wide scales. Measurement-driven simulations show that with a given cost budget @math can improve user performance by around 10-45 percent at global scale networks and 15-35 percent at a country-wide scale network.", "Edge computing provides an attractive platform for bringing data and processing closer to users in networked environments. Several edge proposals aim to place the edge servers at a couple hop distance from the client to ensure lowest possible compute and network delay. An attractive edge server placement is to co-locate it with existing (cellular) base stations to avoid additional infrastructure establishment costs. However, determining the exact locations for edge servers is an important question that must be resolved for optimal placement. In this paper, we present Anveshak1, a framework that solves the problem of placing edge servers in a geographical topology and provides the optimal solution for edge providers. Our proposed solution considers both end-user application requirements as well as deployment and operating costs incurred by edge platform providers. The placement optimization metric of Anveshak considers the request pattern of users and existing user-established edge servers. In our evaluation based on real datasets, we show that Anveshak achieves 67 increase in user satisfaction while maintaining high server utilization." ] }
1907.07349
2960672606
Edge computing in the Internet of Things brings applications and content closer to the users by introducing an additional computational layer at the network infrastructure, between cloud and the resource-constrained data producing devices and user equipment. This way, the opportunistic nature of the operational environment is addressed by introducing computational power in location with low latency and high bandwidth. However, location-aware deployment of edge computing infrastructure requires careful placement scheme for edge servers. To provide the best possible Quality of Service for the user applications, their proximity needs to be optimized. Moreover, the deployment faces practical limitations in budget limitations, hardware requirements of servers and in online load balancing between servers. To address these challenges, we formulate the edge server placement as a capacitated location-allocation problem, while minimizing the distance between servers and access points of a real city-wide Wi-Fi network deployment. In our algorithm, we utilize both upper and lower server capacity constraints for load balancing. Furthermore, we enable sharing of workload between servers to facilitate deployments with low capacity servers. The performance of the algorithm is demonstrated in placement scenarios, exemplified by high capacity servers for edge computing and low capacity servers for Fog computing, with different parameters in a real-world data set. The data set consists of both dense deployment of access points in central areas, but also sparse deployment in suburban areas within the same network infrastructure. In comparison, we show that previous approaches do not sufficiently address such deployment. The presented algorithm is able to provide optimal placements that minimize the distances and provide balanced workload with sharing by following the capacity constraints.
To maintain sufficient QoS within budget limitations, two main approaches were used for determining the required number of servers. First, a tolerated distance from the server was decided and the number of servers was minimized such that the distance constraint was met for each access point @cite_15 @cite_30 @cite_2 . Second, the number of servers was fixed by the budget and the servers were placed so that the best proximity was obtained @cite_24 @cite_49 @cite_20 @cite_14 @cite_11 . A third approach was to minimize the distance while penalizing the number of servers @cite_16 .
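The first approach, minimising the number of servers subject to a coverage distance, can be sketched as a greedy set-cover heuristic. The snippet below is an illustrative simplification rather than the algorithm of any cited work, and it assumes the candidate server sites are given.

```python
import numpy as np

def greedy_min_servers(aps, candidates, radius):
    """Pick candidate sites until every access point lies within `radius`
    of some chosen server (greedy set-cover sketch)."""
    dist = np.linalg.norm(aps[:, None, :] - candidates[None, :, :], axis=2)
    covered = np.zeros(len(aps), dtype=bool)
    chosen = []
    while not covered.all():
        gains = (dist[~covered] <= radius).sum(axis=0)  # newly covered APs per site
        best = int(gains.argmax())
        if gains[best] == 0:
            break                                       # remaining APs cannot be covered
        chosen.append(best)
        covered |= dist[:, best] <= radius
    return chosen
```

The second approach would instead fix the number of chosen sites in advance and minimise the distances, for example with the k-means style placement sketched earlier.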
{ "cite_N": [ "@cite_30", "@cite_14", "@cite_24", "@cite_49", "@cite_2", "@cite_15", "@cite_16", "@cite_20", "@cite_11" ], "mid": [ "2792782202", "2739769312", "2597750431", "", "2894009982", "2906068868", "2514431540", "", "2809740924" ], "abstract": [ "Mobile edge computing (MEC) is an emerging technology that aims at pushing applications and content close to the users (e.g., at base stations, access points, and aggregation networks) to reduce latency, improve quality of experience, and ensure highly efficient network operation and service delivery. It principally relies on virtualization-enabled MEC servers with limited capacity at the edge of the network. One key issue is to dimension such systems in terms of server size, server number, and server operation area to meet MEC goals. In this paper, we formulate this problem as a mixed integer linear program. We then propose a graph-based algorithm that, taking into account a maximum MEC server capacity, provides a partition of MEC clusters, which consolidates as many communications as possible at the edge. We use a dataset of mobile communications to extensively evaluate them with real world spatio-temporal human dynamics. In addition to quantifying macroscopic MEC benefits, the evaluation shows that our algorithm provides MEC area partitions that largely offload the core, thus pushing the load at the edge (e.g., with 10 small MEC servers between 55 and 64 of the traffic stay at the edge), and that are well balanced through time.", "", "A device for selectively moistening the flap of envelopes has a moistening member and a pivoting moistening deflector selectively operable to moisten the flap or not. It further comprises a pivoting slide bar to the front of the moistening member and actuated between a protection position and a retracted position in front of the moistening member. The moistening deflector is adapted to cooperate with the slide bar and the moistening member and is coupled to the bar so that they are actuated virtually simultaneously. The device finds applications in automatic mail handling.", "", "Edge server placement problem is a hot topic in mobile edge computing. In this paper, we study the problem of energy-aware edge server placement and try to find a more effective placement scheme with low energy consumption. Then, we formulate the problem as a multi-objective optimization problem and devise a particle swarm optimization based energy-aware edge server placement algorithm to find the optimal solution. We evaluate the algorithm based on the real dataset from Shanghai Telecom and the results show our algorithm can reduce more than 10 energy consumption with over 15 improvement in computing resource utilization, compared to other algorithms.", "Remote clouds are gradually unable to achieve ultra-low latency to meet the requirements of mobile users because of the intolerable long distance between remote clouds and mobile users and the network congestion caused by the tremendous number of users. Mobile edge computing, a new paradigm, has been proposed to mitigate aforementioned effects. Existing studies mostly assume the edge servers have been deployed properly and they just pay attention to how to minimize the delay between edge servers and mobile users. In this paper, considering the practical environment, we investigate how to deploy edge servers effectively and economically in wireless metropolitan area networks. 
Thus, we address the problem of minimizing the number of edge servers while ensuring some QoS requirements. Aiming at more consistence with a generalized condition, we extend the definition of the dominating set, and transform the addressed problem into the minimum dominating set problem in graph theory. In addition, two conditions are considered for the capacities of edge servers: one is that the capacities of edge servers can be configured on demand, and the other is that all the edge servers have the same capacities. For the on-demand condition, a greedy based algorithm is proposed to find the solution, and the key idea is to iteratively choose nodes that can connect as many other nodes as possible under the delay, degree and cluster size constraints. Furthermore, a simulated annealing based approach is given for global optimization. For the second condition, a greedy based algorithm is also proposed to satisfy the capacity constraint of edge servers and minimize the number of edge servers simultaneously. The simulation results show that the proposed algorithms are feasible.", "We present @math , a decision support framework to provision edge servers for online services providers (OSPs). @math takes advantage of the increasingly flexible edge server placement, which is enabled by new technologies such as edge computing platforms, cloudlets and network function virtualization, to optimize the overall performance and cost of edge infrastructures. The key difference between @math and traditional server placement approaches lies on that @math can discover proper unforeseen edge locations which significantly improve the efficiency and reduce the cost of edge provisioning. We show how @math effectively identifies promising edge locations which are close to a collection of users merely with inaccurate network distance estimation methods, e.g., geographic coordinate (GC) and network coordinate systems (NC). We also show how @math comprehensively considers various pragmatic concerns in edge provisioning, such as traffic limits by law or ISP policy, edge site deployment and resource usage cost, over-provisioning for fault tolerance, etc., with a simple optimization model. We simulate @math using real network data at global and county-wide scales. Measurement-driven simulations show that with a given cost budget @math can improve user performance by around 10-45 percent at global scale networks and 15-35 percent at a country-wide scale network.", "", "Abstract With the rapid increase in the development of the Internet of Things and 5G networks in the smart city context, a large amount of data (i.e., big data) is expected to be generated, resulting in increased latency for the traditional cloud computing paradigm. To reduce the latency, mobile edge computing has been considered for offloading a part of the workload from mobile devices to nearby edge servers that have sufficient computation resources. Although there has been significant research in the field of mobile edge computing, little attention has been given to understanding the placement of edge servers in smart cities to optimize the mobile edge computing network performance. In this paper, we study the edge server placement problem in mobile edge computing environments for smart cities. 
First, we formulate the problem as a multi-objective constraint optimization problem that places edge servers in some strategic locations with the objective to make balance the workloads of edge servers and minimize the access delay between the mobile user and edge server. Then, we adopt mixed integer programming to find the optimal solution. Experimental results based on Shanghai Telecom’s base station dataset show that our approach outperforms several representative approaches in terms of access delay and workload balancing." ] }
1907.07349
2960672606
Edge computing in the Internet of Things brings applications and content closer to the users by introducing an additional computational layer at the network infrastructure, between cloud and the resource-constrained data producing devices and user equipment. This way, the opportunistic nature of the operational environment is addressed by introducing computational power in location with low latency and high bandwidth. However, location-aware deployment of edge computing infrastructure requires careful placement scheme for edge servers. To provide the best possible Quality of Service for the user applications, their proximity needs to be optimized. Moreover, the deployment faces practical limitations in budget limitations, hardware requirements of servers and in online load balancing between servers. To address these challenges, we formulate the edge server placement as a capacitated location-allocation problem, while minimizing the distance between servers and access points of a real city-wide Wi-Fi network deployment. In our algorithm, we utilize both upper and lower server capacity constraints for load balancing. Furthermore, we enable sharing of workload between servers to facilitate deployments with low capacity servers. The performance of the algorithm is demonstrated in placement scenarios, exemplified by high capacity servers for edge computing and low capacity servers for Fog computing, with different parameters in a real-world data set. The data set consists of both dense deployment of access points in central areas, but also sparse deployment in suburban areas within the same network infrastructure. In comparison, we show that previous approaches do not sufficiently address such deployment. The presented algorithm is able to provide optimal placements that minimize the distances and provide balanced workload with sharing by following the capacity constraints.
Scalability was considered from two perspectives: the scalability of the placement algorithm itself and the capacity of the resulting deployment. First, algorithmic scalability was examined with respect to the number of access points and edge servers. A basic k-means algorithm was applied without capacity constraints in @cite_24 and hierarchical clustering in @cite_1 , both scaling well. In some works, scalability was achieved with a two-step approach: the data was first partitioned into clusters without applying the capacity constraints, after which the servers were placed in each cluster separately @cite_3 @cite_16 . Similarly, the servers were first placed without considering the capacity constraints and the access points were then assigned to the servers @cite_41 . Such approaches save computation time, but consume memory, as the allocation step is carried out for the whole data set at once. In the work of @cite_30 , a dense grid was laid over the geographical area and the spatial extents of the servers were obtained by merging grid cells based on user mobility. Here the computation time depends on the number of grid cells rather than on the number of access points, so the method scales well with the number of access points but not with the spatial size of the area.
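To make the two-step idea above concrete, the following Python sketch first partitions access points with plain k-means (ignoring capacity) and then places one server at the workload-weighted centroid of each cluster. This is a minimal sketch for intuition only, not the algorithm of any cited work; the function name, the synthetic data, and the use of scikit-learn are assumptions of this example.

```python
# Minimal sketch of a two-step edge server placement:
# step 1 clusters access points without capacity constraints,
# step 2 places one server per cluster (illustration only).
import numpy as np
from sklearn.cluster import KMeans

def two_step_placement(ap_coords, ap_workloads, n_servers):
    """ap_coords: (N, 2) access point coordinates,
    ap_workloads: (N,) workload per access point,
    n_servers: number of edge servers to place."""
    # Step 1: capacity-agnostic partitioning of the access points.
    km = KMeans(n_clusters=n_servers, n_init=10, random_state=0).fit(ap_coords)
    labels = km.labels_

    # Step 2: place each server at the workload-weighted centroid of its
    # cluster; a per-cluster capacity check could be added at this point.
    servers = np.zeros((n_servers, 2))
    for c in range(n_servers):
        members = labels == c
        servers[c] = np.average(ap_coords[members], axis=0,
                                weights=ap_workloads[members])
    return servers, labels

# Synthetic example: 200 access points in a 10 km x 10 km area.
rng = np.random.default_rng(0)
aps = rng.uniform(0.0, 10.0, size=(200, 2))
loads = rng.integers(1, 50, size=200).astype(float)
centers, assignment = two_step_placement(aps, loads, n_servers=8)
print(centers)
```

The memory cost mentioned above shows up here as the single clustering pass over the full access-point set; the placement step itself only touches one cluster at a time.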
{ "cite_N": [ "@cite_30", "@cite_41", "@cite_1", "@cite_3", "@cite_24", "@cite_16" ], "mid": [ "2792782202", "2916014425", "2947632778", "2887279295", "2597750431", "2514431540" ], "abstract": [ "Mobile edge computing (MEC) is an emerging technology that aims at pushing applications and content close to the users (e.g., at base stations, access points, and aggregation networks) to reduce latency, improve quality of experience, and ensure highly efficient network operation and service delivery. It principally relies on virtualization-enabled MEC servers with limited capacity at the edge of the network. One key issue is to dimension such systems in terms of server size, server number, and server operation area to meet MEC goals. In this paper, we formulate this problem as a mixed integer linear program. We then propose a graph-based algorithm that, taking into account a maximum MEC server capacity, provides a partition of MEC clusters, which consolidates as many communications as possible at the edge. We use a dataset of mobile communications to extensively evaluate them with real world spatio-temporal human dynamics. In addition to quantifying macroscopic MEC benefits, the evaluation shows that our algorithm provides MEC area partitions that largely offload the core, thus pushing the load at the edge (e.g., with 10 small MEC servers between 55 and 64 of the traffic stay at the edge), and that are well balanced through time.", "", "In this article, we propose content-centric, in-network content caching and placement approaches that leverage cooperation among edge cloud devices, content popularity, and GPS trajectory information to improve content delivery speeds, network traffic congestion, cache resource utilization efficiency, and users' quality of experience in highly populated cities. More specifically, our proposed approaches exploit collaborative filtering theory to provide accurate and efficient content popularity predictions to enable proactive in-network caching of Internet content. We propose a practical content delivery architecture that consists of standalone edge cloud devices to be deployed in the city to cache and process popular Internet content as it disseminates throughout the network. We also show that our proposed approaches ensure responsive cloud content delivery with minimized service disruption.", "Edge computing provides an attractive platform for bringing data and processing closer to users in networked environments. Several edge proposals aim to place the edge servers at a couple hop distance from the client to ensure lowest possible compute and network delay. An attractive edge server placement is to co-locate it with existing (cellular) base stations to avoid additional infrastructure establishment costs. However, determining the exact locations for edge servers is an important question that must be resolved for optimal placement. In this paper, we present Anveshak1, a framework that solves the problem of placing edge servers in a geographical topology and provides the optimal solution for edge providers. Our proposed solution considers both end-user application requirements as well as deployment and operating costs incurred by edge platform providers. The placement optimization metric of Anveshak considers the request pattern of users and existing user-established edge servers. 
In our evaluation based on real datasets, we show that Anveshak achieves 67 increase in user satisfaction while maintaining high server utilization.", "A device for selectively moistening the flap of envelopes has a moistening member and a pivoting moistening deflector selectively operable to moisten the flap or not. It further comprises a pivoting slide bar to the front of the moistening member and actuated between a protection position and a retracted position in front of the moistening member. The moistening deflector is adapted to cooperate with the slide bar and the moistening member and is coupled to the bar so that they are actuated virtually simultaneously. The device finds applications in automatic mail handling.", "We present @math , a decision support framework to provision edge servers for online services providers (OSPs). @math takes advantage of the increasingly flexible edge server placement, which is enabled by new technologies such as edge computing platforms, cloudlets and network function virtualization, to optimize the overall performance and cost of edge infrastructures. The key difference between @math and traditional server placement approaches lies on that @math can discover proper unforeseen edge locations which significantly improve the efficiency and reduce the cost of edge provisioning. We show how @math effectively identifies promising edge locations which are close to a collection of users merely with inaccurate network distance estimation methods, e.g., geographic coordinate (GC) and network coordinate systems (NC). We also show how @math comprehensively considers various pragmatic concerns in edge provisioning, such as traffic limits by law or ISP policy, edge site deployment and resource usage cost, over-provisioning for fault tolerance, etc., with a simple optimization model. We simulate @math using real network data at global and county-wide scales. Measurement-driven simulations show that with a given cost budget @math can improve user performance by around 10-45 percent at global scale networks and 15-35 percent at a country-wide scale network." ] }
1907.07349
2960672606
Edge computing in the Internet of Things brings applications and content closer to the users by introducing an additional computational layer at the network infrastructure, between cloud and the resource-constrained data producing devices and user equipment. This way, the opportunistic nature of the operational environment is addressed by introducing computational power in location with low latency and high bandwidth. However, location-aware deployment of edge computing infrastructure requires careful placement scheme for edge servers. To provide the best possible Quality of Service for the user applications, their proximity needs to be optimized. Moreover, the deployment faces practical limitations in budget limitations, hardware requirements of servers and in online load balancing between servers. To address these challenges, we formulate the edge server placement as a capacitated location-allocation problem, while minimizing the distance between servers and access points of a real city-wide Wi-Fi network deployment. In our algorithm, we utilize both upper and lower server capacity constraints for load balancing. Furthermore, we enable sharing of workload between servers to facilitate deployments with low capacity servers. The performance of the algorithm is demonstrated in placement scenarios, exemplified by high capacity servers for edge computing and low capacity servers for Fog computing, with different parameters in a real-world data set. The data set consists of both dense deployment of access points in central areas, but also sparse deployment in suburban areas within the same network infrastructure. In comparison, we show that previous approaches do not sufficiently address such deployment. The presented algorithm is able to provide optimal placements that minimize the distances and provide balanced workload with sharing by following the capacity constraints.
When the aim was to minimize the number of servers, the optimization was carried out, for example, with different distance thresholds @cite_15 @cite_45 or capacity thresholds @cite_15 @cite_35 . The effects of capacity constraints on intra-cluster traffic and of temporal changes on workload balance are investigated in @cite_30 . In @cite_2 , the effect of the number of access points on energy consumption and average resource utilization was also explored. In @cite_16 , the cost of the deployment was evaluated as a function of the percentage of people within a given distance from a server.
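To illustrate the server-minimization objective, the sketch below greedily opens servers at access-point locations until every access point is covered within a distance threshold, assigning at most `capacity` access points per opened server. It is a simplified, set-cover-style illustration of the general idea, not the exact greedy or simulated-annealing algorithms of the cited works; all names and parameter values are assumptions of this example.

```python
# Greedy sketch of minimizing the number of edge servers under a coverage
# radius and a per-server capacity cap (illustration only).
import numpy as np

def greedy_min_servers(ap_coords, radius, capacity):
    """Greedily choose server sites (at access point locations) until every
    access point is assigned to a server within `radius`, with each server
    serving at most `capacity` access points."""
    n = len(ap_coords)
    # Pairwise distances between candidate sites (= access points) and APs.
    dist = np.linalg.norm(ap_coords[:, None, :] - ap_coords[None, :, :], axis=-1)
    uncovered = np.ones(n, dtype=bool)
    servers = []
    while uncovered.any():
        # Score each candidate by how many uncovered APs it could reach.
        cover_counts = ((dist <= radius) & uncovered[None, :]).sum(axis=1)
        best = int(np.argmax(cover_counts))
        # Assign up to `capacity` of the nearest reachable uncovered APs.
        reachable = np.where((dist[best] <= radius) & uncovered)[0]
        chosen = reachable[np.argsort(dist[best, reachable])][:capacity]
        uncovered[chosen] = False
        # A site may be picked again, meaning extra capacity at that location.
        servers.append(best)
    return servers

rng = np.random.default_rng(1)
aps = rng.uniform(0.0, 10.0, size=(150, 2))
sites = greedy_min_servers(aps, radius=2.0, capacity=20)
print(len(sites), "servers opened at access point indices", sites)
```

Tightening `radius` or `capacity` increases the number of opened servers, which is exactly the trade-off the threshold-based formulations above explore.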
{ "cite_N": [ "@cite_30", "@cite_35", "@cite_45", "@cite_2", "@cite_15", "@cite_16" ], "mid": [ "2792782202", "", "", "2894009982", "2906068868", "2514431540" ], "abstract": [ "Mobile edge computing (MEC) is an emerging technology that aims at pushing applications and content close to the users (e.g., at base stations, access points, and aggregation networks) to reduce latency, improve quality of experience, and ensure highly efficient network operation and service delivery. It principally relies on virtualization-enabled MEC servers with limited capacity at the edge of the network. One key issue is to dimension such systems in terms of server size, server number, and server operation area to meet MEC goals. In this paper, we formulate this problem as a mixed integer linear program. We then propose a graph-based algorithm that, taking into account a maximum MEC server capacity, provides a partition of MEC clusters, which consolidates as many communications as possible at the edge. We use a dataset of mobile communications to extensively evaluate them with real world spatio-temporal human dynamics. In addition to quantifying macroscopic MEC benefits, the evaluation shows that our algorithm provides MEC area partitions that largely offload the core, thus pushing the load at the edge (e.g., with 10 small MEC servers between 55 and 64 of the traffic stay at the edge), and that are well balanced through time.", "", "", "Edge server placement problem is a hot topic in mobile edge computing. In this paper, we study the problem of energy-aware edge server placement and try to find a more effective placement scheme with low energy consumption. Then, we formulate the problem as a multi-objective optimization problem and devise a particle swarm optimization based energy-aware edge server placement algorithm to find the optimal solution. We evaluate the algorithm based on the real dataset from Shanghai Telecom and the results show our algorithm can reduce more than 10 energy consumption with over 15 improvement in computing resource utilization, compared to other algorithms.", "Remote clouds are gradually unable to achieve ultra-low latency to meet the requirements of mobile users because of the intolerable long distance between remote clouds and mobile users and the network congestion caused by the tremendous number of users. Mobile edge computing, a new paradigm, has been proposed to mitigate aforementioned effects. Existing studies mostly assume the edge servers have been deployed properly and they just pay attention to how to minimize the delay between edge servers and mobile users. In this paper, considering the practical environment, we investigate how to deploy edge servers effectively and economically in wireless metropolitan area networks. Thus, we address the problem of minimizing the number of edge servers while ensuring some QoS requirements. Aiming at more consistence with a generalized condition, we extend the definition of the dominating set, and transform the addressed problem into the minimum dominating set problem in graph theory. In addition, two conditions are considered for the capacities of edge servers: one is that the capacities of edge servers can be configured on demand, and the other is that all the edge servers have the same capacities. For the on-demand condition, a greedy based algorithm is proposed to find the solution, and the key idea is to iteratively choose nodes that can connect as many other nodes as possible under the delay, degree and cluster size constraints. 
Furthermore, a simulated annealing based approach is given for global optimization. For the second condition, a greedy based algorithm is also proposed to satisfy the capacity constraint of edge servers and minimize the number of edge servers simultaneously. The simulation results show that the proposed algorithms are feasible.", "We present @math , a decision support framework to provision edge servers for online services providers (OSPs). @math takes advantage of the increasingly flexible edge server placement, which is enabled by new technologies such as edge computing platforms, cloudlets and network function virtualization, to optimize the overall performance and cost of edge infrastructures. The key difference between @math and traditional server placement approaches lies on that @math can discover proper unforeseen edge locations which significantly improve the efficiency and reduce the cost of edge provisioning. We show how @math effectively identifies promising edge locations which are close to a collection of users merely with inaccurate network distance estimation methods, e.g., geographic coordinate (GC) and network coordinate systems (NC). We also show how @math comprehensively considers various pragmatic concerns in edge provisioning, such as traffic limits by law or ISP policy, edge site deployment and resource usage cost, over-provisioning for fault tolerance, etc., with a simple optimization model. We simulate @math using real network data at global and county-wide scales. Measurement-driven simulations show that with a given cost budget @math can improve user performance by around 10-45 percent at global scale networks and 15-35 percent at a country-wide scale network." ] }
1907.07349
2960672606
Edge computing in the Internet of Things brings applications and content closer to the users by introducing an additional computational layer at the network infrastructure, between cloud and the resource-constrained data producing devices and user equipment. This way, the opportunistic nature of the operational environment is addressed by introducing computational power in location with low latency and high bandwidth. However, location-aware deployment of edge computing infrastructure requires careful placement scheme for edge servers. To provide the best possible Quality of Service for the user applications, their proximity needs to be optimized. Moreover, the deployment faces practical limitations in budget limitations, hardware requirements of servers and in online load balancing between servers. To address these challenges, we formulate the edge server placement as a capacitated location-allocation problem, while minimizing the distance between servers and access points of a real city-wide Wi-Fi network deployment. In our algorithm, we utilize both upper and lower server capacity constraints for load balancing. Furthermore, we enable sharing of workload between servers to facilitate deployments with low capacity servers. The performance of the algorithm is demonstrated in placement scenarios, exemplified by high capacity servers for edge computing and low capacity servers for Fog computing, with different parameters in a real-world data set. The data set consists of both dense deployment of access points in central areas, but also sparse deployment in suburban areas within the same network infrastructure. In comparison, we show that previous approaches do not sufficiently address such deployment. The presented algorithm is able to provide optimal placements that minimize the distances and provide balanced workload with sharing by following the capacity constraints.
Simulated data sets were utilized in @cite_15 @cite_32 , whereas the other studies relied on real-world data sets. The data set of @cite_47 , used in @cite_24 @cite_30 @cite_21 , consists of geo-referenced phone call detail records over the city of Milan for a three-month period. The Shanghai Telecom data set contains records of mobile users accessing 3000 base stations, with 4.6 million call records and 7.5 million movement traces of 10 thousand users over six successive months @cite_0 ; it was used in @cite_41 @cite_11 @cite_36 . The data set utilized in @cite_1 consists of thousands of Wi-Fi access points in New York City. In @cite_16 , the data set was obtained through globally distributed PlanetLab nodes and measurement nodes deployed in China.
{ "cite_N": [ "@cite_30", "@cite_47", "@cite_41", "@cite_36", "@cite_21", "@cite_1", "@cite_32", "@cite_24", "@cite_0", "@cite_15", "@cite_16", "@cite_11" ], "mid": [ "2792782202", "2190432600", "2916014425", "", "", "2947632778", "", "2597750431", "2765605909", "2906068868", "2514431540", "2809740924" ], "abstract": [ "Mobile edge computing (MEC) is an emerging technology that aims at pushing applications and content close to the users (e.g., at base stations, access points, and aggregation networks) to reduce latency, improve quality of experience, and ensure highly efficient network operation and service delivery. It principally relies on virtualization-enabled MEC servers with limited capacity at the edge of the network. One key issue is to dimension such systems in terms of server size, server number, and server operation area to meet MEC goals. In this paper, we formulate this problem as a mixed integer linear program. We then propose a graph-based algorithm that, taking into account a maximum MEC server capacity, provides a partition of MEC clusters, which consolidates as many communications as possible at the edge. We use a dataset of mobile communications to extensively evaluate them with real world spatio-temporal human dynamics. In addition to quantifying macroscopic MEC benefits, the evaluation shows that our algorithm provides MEC area partitions that largely offload the core, thus pushing the load at the edge (e.g., with 10 small MEC servers between 55 and 64 of the traffic stay at the edge), and that are well balanced through time.", "The study of socio-technical systems has been revolutionized by the unprecedented amount of digital records that are constantly being produced by human activities such as accessing Internet services, using mobile devices, and consuming energy and knowledge. In this paper, we describe the richest open multi-source dataset ever released on two geographical areas. The dataset is composed of telecommunications, weather, news, social networks and electricity data from the city of Milan and the Province of Trentino. The unique multi-source composition of the dataset makes it an ideal testbed for methodologies and approaches aimed at tackling a wide range of problems including energy consumption, mobility planning, tourist and migrant flows, urban structures and interactions, event detection, urban well-being and many others.", "", "", "", "In this article, we propose content-centric, in-network content caching and placement approaches that leverage cooperation among edge cloud devices, content popularity, and GPS trajectory information to improve content delivery speeds, network traffic congestion, cache resource utilization efficiency, and users' quality of experience in highly populated cities. More specifically, our proposed approaches exploit collaborative filtering theory to provide accurate and efficient content popularity predictions to enable proactive in-network caching of Internet content. We propose a practical content delivery architecture that consists of standalone edge cloud devices to be deployed in the city to cache and process popular Internet content as it disseminates throughout the network. We also show that our proposed approaches ensure responsive cloud content delivery with minimized service disruption.", "", "A device for selectively moistening the flap of envelopes has a moistening member and a pivoting moistening deflector selectively operable to moisten the flap or not. 
It further comprises a pivoting slide bar to the front of the moistening member and actuated between a protection position and a retracted position in front of the moistening member. The moistening deflector is adapted to cooperate with the slide bar and the moistening member and is coupled to the bar so that they are actuated virtually simultaneously. The device finds applications in automatic mail handling.", "Abstract Mobile edge computing is an emerging technology that provides services within the close proximity of mobile subscribers by edge servers that are deployed in each edge server. Mobile edge computing platform enables application developers and content providers to serve context-aware services (such as service recommendation) by using real time radio access network information. In service recommendation system, quality of service (QoS) prediction plays an important role when mobile devices or users want to invoke services that can satisfy user QoS requirements. However, user mobility (e.g., from one edge server to another) often makes service QoS prediction values deviate from actual values in traditional mobile networks. Unfortunately, many existing service recommendation approaches fail to consider user mobility. In this paper, we propose a service recommendation approach based on collaborative filtering and make QoS prediction based on user mobility. This approach initially calculates user or edge server similarity and selects the Top-K most-similar neighbors, predicts service QoS, and then makes service recommendation. We have implemented our proposed approach with experiments based on Shanghai Telecom datasets. Experimental results show that our approach can significantly improve on the accuracy of service recommendation in mobile edge computing.", "Remote clouds are gradually unable to achieve ultra-low latency to meet the requirements of mobile users because of the intolerable long distance between remote clouds and mobile users and the network congestion caused by the tremendous number of users. Mobile edge computing, a new paradigm, has been proposed to mitigate aforementioned effects. Existing studies mostly assume the edge servers have been deployed properly and they just pay attention to how to minimize the delay between edge servers and mobile users. In this paper, considering the practical environment, we investigate how to deploy edge servers effectively and economically in wireless metropolitan area networks. Thus, we address the problem of minimizing the number of edge servers while ensuring some QoS requirements. Aiming at more consistence with a generalized condition, we extend the definition of the dominating set, and transform the addressed problem into the minimum dominating set problem in graph theory. In addition, two conditions are considered for the capacities of edge servers: one is that the capacities of edge servers can be configured on demand, and the other is that all the edge servers have the same capacities. For the on-demand condition, a greedy based algorithm is proposed to find the solution, and the key idea is to iteratively choose nodes that can connect as many other nodes as possible under the delay, degree and cluster size constraints. Furthermore, a simulated annealing based approach is given for global optimization. For the second condition, a greedy based algorithm is also proposed to satisfy the capacity constraint of edge servers and minimize the number of edge servers simultaneously. 
The simulation results show that the proposed algorithms are feasible.", "We present @math , a decision support framework to provision edge servers for online services providers (OSPs). @math takes advantage of the increasingly flexible edge server placement, which is enabled by new technologies such as edge computing platforms, cloudlets and network function virtualization, to optimize the overall performance and cost of edge infrastructures. The key difference between @math and traditional server placement approaches lies on that @math can discover proper unforeseen edge locations which significantly improve the efficiency and reduce the cost of edge provisioning. We show how @math effectively identifies promising edge locations which are close to a collection of users merely with inaccurate network distance estimation methods, e.g., geographic coordinate (GC) and network coordinate systems (NC). We also show how @math comprehensively considers various pragmatic concerns in edge provisioning, such as traffic limits by law or ISP policy, edge site deployment and resource usage cost, over-provisioning for fault tolerance, etc., with a simple optimization model. We simulate @math using real network data at global and county-wide scales. Measurement-driven simulations show that with a given cost budget @math can improve user performance by around 10-45 percent at global scale networks and 15-35 percent at a country-wide scale network.", "Abstract With the rapid increase in the development of the Internet of Things and 5G networks in the smart city context, a large amount of data (i.e., big data) is expected to be generated, resulting in increased latency for the traditional cloud computing paradigm. To reduce the latency, mobile edge computing has been considered for offloading a part of the workload from mobile devices to nearby edge servers that have sufficient computation resources. Although there has been significant research in the field of mobile edge computing, little attention has been given to understanding the placement of edge servers in smart cities to optimize the mobile edge computing network performance. In this paper, we study the edge server placement problem in mobile edge computing environments for smart cities. First, we formulate the problem as a multi-objective constraint optimization problem that places edge servers in some strategic locations with the objective to make balance the workloads of edge servers and minimize the access delay between the mobile user and edge server. Then, we adopt mixed integer programming to find the optimal solution. Experimental results based on Shanghai Telecom’s base station dataset show that our approach outperforms several representative approaches in terms of access delay and workload balancing." ] }