text (stringlengths 8 to 3.91k)
label (int64, 0 to 10)
abstract—domain adaptation algorithms are useful when the distributions of the training and the test data are different. in this paper, we focus on the problem of instrumental variation and time-varying drift in the field of sensors and measurement, which can be viewed as discrete and continuous distributional change in the feature space. we propose maximum independence domain adaptation (mida) and semi-supervised mida (smida) to address this problem. domain features are first defined to describe the background information of a sample, such as the device label and acquisition time. then, mida learns a subspace which has maximum independence with the domain features, so as to reduce the inter-domain discrepancy in distributions. a feature augmentation strategy is also designed to project samples according to their backgrounds so as to improve the adaptation. the proposed algorithms are flexible and fast. their effectiveness is verified by experiments on synthetic datasets and four real-world ones on sensors, measurement, and computer vision. they can greatly enhance the practicability of sensor systems, as well as extend the application scope of existing domain adaptation algorithms by uniformly handling different kinds of distributional change. index terms—dimensionality reduction, domain adaptation, drift correction, hilbert-schmidt independence criterion, machine olfaction, transfer learning
2
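a minimal numpy sketch of the core idea in the record above: learn a projection whose embedded samples are maximally independent, in the hsic sense, of the domain features. the linear kernels, the variance/independence trade-off mu, and all names are assumptions of this sketch, not the authors' exact formulation.

```python
import numpy as np

def hsic(K, L):
    # biased empirical hsic between two kernel matrices
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

def mida_like(X, D, h=2, mu=1.0):
    # X: n x p samples; D: n x q domain features (e.g. one-hot device id, time).
    # pick h directions that keep variance while suppressing dependence on D.
    n = X.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    Kd = D @ D.T                                  # linear kernel on domain features
    M = X.T @ H @ (mu * np.eye(n) - Kd) @ H @ X   # variance minus dependence
    vals, vecs = np.linalg.eigh(M)
    W = vecs[:, np.argsort(vals)[::-1][:h]]       # top-h eigenvectors
    return X @ W, W
```

the hsic helper can be used to sanity-check that the embedded samples depend less on the domain features than the raw ones do.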
abstract—low-rank modeling has many important applications in computer vision and machine learning. while the matrix rank is often approximated by the convex nuclear norm, the use of nonconvex low-rank regularizers has demonstrated better empirical performance. however, the resulting optimization problem is much more challenging. recent state-of-the-art methods require an expensive full svd in each iteration. in this paper, we show that for many commonly-used nonconvex low-rank regularizers, the singular values obtained from the proximal operator can be automatically thresholded. this allows the proximal operator to be efficiently approximated by the power method. we then develop a fast proximal algorithm and its accelerated variant with inexact proximal step. a convergence rate of o(1/t), where t is the number of iterations, can be guaranteed. furthermore, we show the proposed algorithm can be parallelized, and the resultant algorithm achieves nearly linear speedup w.r.t. the number of threads. extensive experiments are performed on matrix completion and robust principal component analysis. significant speedup over the state-of-the-art is observed. index terms—low-rank matrix learning, nonconvex regularization, proximal algorithm, parallel algorithm, matrix completion, robust principal component analysis
2
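the speedup claimed above comes from replacing the full svd in each proximal step with a power-method approximation, which suffices once only a few singular values can survive thresholding. a hedged numpy sketch follows; soft-thresholding stands in for the paper's nonconvex shrinkage rules, and the fixed rank k and iteration counts are assumptions.

```python
import numpy as np

def power_svd(A, k, iters=20, seed=0):
    # rank-k svd approximation via subspace (power) iteration
    rng = np.random.default_rng(seed)
    Q = np.linalg.qr(A @ rng.standard_normal((A.shape[1], k)))[0]
    for _ in range(iters):
        Q = np.linalg.qr(A @ (A.T @ Q))[0]
    U, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return Q @ U, s, Vt

def approx_prox(A, lam, k):
    # approximate proximal step: only the top-k singular values can survive
    # the shrinkage, so a full svd is unnecessary
    U, s, Vt = power_svd(A, k)
    s = np.maximum(s - lam, 0.0)   # swap in a nonconvex shrinkage rule here
    return (U * s) @ Vt
```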
abstract most of the face recognition works focus on specific modules or demonstrate a research idea. this paper presents a pose-invariant 3d-aided 2d face recognition system (ur2d) that is robust to pose variations as large as 90◦ by leveraging deep learning technology. the architecture and the interface of ur2d are described, and each module is introduced in detail. extensive experiments are conducted on the uhdb31 and ijb-a datasets, demonstrating that ur2d outperforms existing 2d face recognition systems such as vgg-face, facenet, and a commercial off-the-shelf software (cots) by at least 9% on the uhdb31 dataset and 3% on the ijb-a dataset on average in face identification tasks. ur2d also achieves state-of-the-art performance of 85% on the ijb-a dataset by comparing the rank-1 accuracy score from template matching. it fills a gap by providing a 3d-aided 2d face recognition system that achieves results comparable to 2d face recognition systems using deep learning techniques. keywords: face recognition, 3d-aided 2d face recognition, deep learning, pipeline 2010 msc: 00-01, 99-00
1
abstract. recently, anderson and dumitrescu’s s-finiteness has attracted the interest of several authors. in this paper, we introduce the notions of s-finitely presented modules and then of s-coherent rings, which are s-versions of finitely presented modules and coherent rings, respectively. among other results, we give an s-version of the classical chase’s characterization of coherent rings. we end the paper with a brief discussion on other s-versions of finitely presented modules and coherent rings. we prove that these last s-versions can be characterized in terms of localization. key words. s-finite, s-finitely presented, s-coherent modules, s-coherent rings. 2010 mathematics subject classification. 13e99.
0
abstract growth in both size and complexity of modern data challenges the applicability of traditional likelihood-based inference. composite likelihood (cl) methods address the difficulties related to model selection and computational intractability of the full likelihood by combining a number of low-dimensional likelihood objects into a single objective function used for inference. this paper introduces a procedure to combine partial likelihood objects from a large set of feasible candidates and simultaneously carry out parameter estimation. the new method constructs estimating equations balancing statistical efficiency and computing cost by minimizing an approximate distance from the full likelihood score subject to an ℓ1-norm penalty representing the available computing resources. this results in truncated cl equations containing only the most informative partial likelihood score terms. an asymptotic theory within a framework where both sample size and data dimension grow is developed and finite-sample properties are illustrated through numerical examples.
10
abstract in this work, we formulated a real-world problem related to sewer-pipeline gas detection using classification-based approaches. the primary goal of this work was to identify the hazardousness of a sewer pipeline so as to offer safe and non-hazardous access to sewer-pipeline workers, so that human fatalities, which occur due to toxic exposure to sewer gas components, can be avoided. the dataset acquired through laboratory tests, experiments, and various literature sources was organized to design a predictive model that was able to identify/classify hazardous and non-hazardous situations of a sewer pipeline. to design such a prediction model, several classification algorithms were used and their performances were evaluated and compared, both empirically and statistically, over the collected dataset. in addition, the performances of several ensemble methods were analyzed to understand the extent of improvement offered by these methods. the result of this comprehensive study showed that the instance-based-learning algorithm performed better than many other algorithms such as multi-layer perceptron, radial basis function network, support vector machine, reduced pruning tree, etc. similarly, it was observed that the multi-scheme ensemble approach enhanced the performance of base predictors.
9
abstract state machines. this approach has recently been extended to suggest a formalization of the notion of effective computation over arbitrary countable domains. the central notions are summarized herein.
6
abstract concept, which results in 1169 physical objects in total. afterwards, we utilize a cleaned subset of the project gutenberg corpus [11], which contains 3,036 english books written by 142 authors. an assumption here is that sentences in fictions are more
2
abstract in this paper we study the simple semi-lévy driven continuous-time generalized autoregressive conditionally heteroscedastic (ss-cogarch) process. the statistical properties of this process are characterized. this process has the potential to approximate any semi-lévy driven cogarch processes. we show that the state representation of such an ss-cogarch process can be described by a random recurrence equation with periodic random coefficients. the almost sure absolute convergence of the state process is proved. the periodically stationary solution of the state process is shown, which causes the volatility to be periodically stationary under some suitable conditions. it is also shown that the increments with constant length of such an ss-cogarch process are themselves a periodically correlated (pc) process. finally, we apply some tests to investigate the pc behavior of the increments (with constant length) of the simulated samples of the proposed ss-cogarch process. keywords: continuous-time garch process; semi-lévy process; periodically correlated; periodically stationary.
10
abstract
1
abstract background: the human habitat is a host where microbial species evolve, function, and continue to evolve. elucidating how microbial communities respond to human habitats is a fundamental and critical task, as establishing baselines of the human microbiome is essential in understanding its role in human disease and health. recent studies on the healthy human microbiome focus on particular body habitats, assuming that microbiomes develop similar structural patterns to perform similar ecosystem functions under the same environmental conditions. however, current studies usually overlook the complex and interconnected landscape of the human microbiome and restrict themselves to particular body habitats with learning models tuned to specific criteria. therefore, these methods cannot capture the real-world underlying microbial patterns effectively. results: to obtain a comprehensive view, we propose a novel ensemble clustering framework to mine the structure of microbial community patterns on large-scale metagenomic data. particularly, we first build a microbial similarity network via integrating 1920 metagenomic samples from three body habitats of healthy adults. then a novel symmetric nonnegative matrix factorization (nmf) based ensemble model is proposed and applied onto the network to detect clustering patterns. extensive experiments are conducted to evaluate the effectiveness of our model on deriving microbial communities with respect to body habitat and host gender. from the clustering results, we observed that body habitat exhibits a strong bound but non-unique microbial structural pattern. meanwhile, the human microbiome reveals different degrees of structural variation over body habitat and host gender. conclusions: in summary, our ensemble clustering framework can efficiently explore integrated clustering results to accurately identify microbial communities, and provide a comprehensive view of a set of microbial communities. the clustering results indicate that the structure of the human microbiome varies systematically across body habitats and host genders. such trends depict an integrated biography of microbial communities, which offers a new insight towards uncovering the pathogenic model of the human microbiome.
5
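since the model above centers on symmetric nmf over a sample similarity network, a small stand-in sketch may help. this uses a common damped multiplicative update for a ≈ h hᵀ; it is not the authors' ensemble model, and the update choice and names are assumptions.

```python
import numpy as np

def symnmf(A, k, iters=200, seed=0):
    # symmetric nmf of a nonnegative similarity matrix A (n x n);
    # rows of H act as soft community memberships
    rng = np.random.default_rng(seed)
    H = rng.random((A.shape[0], k))
    for _ in range(iters):
        num = A @ H
        den = H @ (H.T @ H) + 1e-9
        H *= 0.5 * (1.0 + num / den)   # damped multiplicative update
    return H
```

cluster labels can then be read off as H.argmax(axis=1), and an ensemble could aggregate such labelings over multiple runs.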
abstract—regions of nested loops are a common feature of high performance computing (hpc) codes. in shared memory programming models, such as openmp, these structures are the most common source of parallelism. parallelising these structures requires programmers to make a static decision on how parallelism should be applied. however, depending on the parameters of the problem and the nature of the code, static decisions on which loop to parallelise may not be optimal, especially as they do not enable the exploitation of any runtime characteristics of the execution. changes to the iterations of the loop which is chosen to be parallelised might limit the number of processors that can be utilised. we have developed a system that allows a code to make a dynamic choice, at runtime, of what parallelism is applied to nested loops. the system works using a source-to-source compiler, which we have created, to perform transformations to user’s code automatically, through a directive based approach (similar to openmp). this approach requires the programmer to specify how the loops of the region can be parallelised and our runtime library is then responsible for making the decisions dynamically during the execution of the code. our method for providing dynamic decisions on which loop to parallelise significantly outperforms the standard methods for achieving this through openmp (using if clauses) and further optimisations were possible with our system when addressing simulations where the number of iterations of the loops change during the runtime of the program or loops are not perfectly nested.
6
abstract in this paper, we propose a semi-supervised learning method where we train two neural networks in a multi-task fashion: a target network and a confidence network. the target network is optimized to perform a given task and is trained using a large set of unlabeled data that are weakly annotated. we propose to weight the gradient updates to the target network using the scores provided by the second confidence network, which is trained on a small amount of supervised data. thus we prevent weight updates computed from noisy labels from harming the quality of the target network model. we evaluate our learning strategy on two different tasks: document ranking and sentiment classification. the results demonstrate that our approach not only enhances the performance compared to the baselines but also speeds up the learning process from weak labels.
9
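a minimal pytorch sketch of the weighting idea described above: per-example losses on weakly labelled data are scaled by the confidence network's scores before the gradient step. the module names, the sigmoid scoring, and the omission of the confidence network's own supervised training are assumptions of this sketch.

```python
import torch
import torch.nn.functional as F

def weighted_weak_update(target_net, conf_net, x, weak_y, opt):
    # down-weight gradient updates from examples the confidence net distrusts
    per_example = F.cross_entropy(target_net(x), weak_y, reduction="none")
    with torch.no_grad():
        w = torch.sigmoid(conf_net(x)).squeeze(-1)  # confidence in each weak label
    loss = (w * per_example).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```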
abstract: materials design and development typically takes several decades from the initial discovery to commercialization with the traditional trial and error development approach. with the accumulation of data from both experimental and computational results, data-based machine learning is becoming an emerging field in materials discovery, design and property prediction. this manuscript reviews the history of materials science as a discipline, the most common machine learning methods used in materials science, and specifically how they are used in materials discovery, design, synthesis and even failure detection and analysis after materials are deployed in real application. finally, the limitations of machine learning for application in materials science and challenges in this emerging field are discussed. keywords: machine learning, materials discovery and design, materials synthesis, failure detection 1. introduction materials science has a long history that dates back to the bronze age 1. however, it was not until the 16th century that the first book on metallurgy was published, marking the beginning of systematic studies in materials science 2. research in materials science was purely empirical until theoretical models were developed. with the advent of computers in the last century, numerical methods to solve theoretical models became available, ranging from dft (density functional theory) based quantum mechanical modeling of electronic structure for optoelectronic properties calculation, to continuum based finite element modeling for mechanical properties 3-4. multiscale modeling that bridges various time and spatial scales was also developed in materials science to better simulate real complex systems 5. even so, it takes several decades from materials discovery to development and commercialization 6-7. even though physical modeling can reduce the amount of time by guiding experimental work, its limitations are also obvious. dft is only used for optoelectronic property calculation of functional materials, and is limited to materials without defects 8. the assumption itself is far off from reality. new concepts such as multiscale modeling are still far away from large-scale real industrial application. traditional ways of materials development are impeding progress in this field and the relevant technological industry. with the large amount of complex data generated by experiment, and especially simulation results from both published and archived data including materials property values, processing conditions, and microstructural images, analyzing them all becomes increasingly challenging for researchers. inspired by the human genome initiative, the obama administration launched a materials genome initiative hoping to reduce current materials development time to half 9. with the increase of computing power and the development of machine learning algorithms, materials informatics has increasingly become another paradigm in the field. researchers are already using machine learning methods for materials property prediction and discovery. machine learning forward models are used for materials property prediction after being trained on data from experiments and physical simulations. bhadeshia et al. applied the neural network (nn) technique to model creep property and phase structure in steel 10-11. crystal structure prediction is another area of study for machine learning thanks to the large amount of structural data in crystallographic databases. k-nearest-
5
abstract
1
abstract
2
abstract: in this paper, an efficient offline handwritten character recognition algorithm is proposed based on an associative memory net (amn). the amn used in this work is basically auto-associative. the implementation is carried out completely in ‘c’ language. to make the system perform at its best with minimal computation time, a parallel algorithm is also developed using an api package openmp. characters are mainly english alphabets (small (26), capital (26)) collected from system (52) and from different persons (52). the characters collected from the system are used to train the amn and characters collected from different persons are used for testing the recognition ability of the net. the detailed analysis showed that the network recognizes the handwritten characters with a recognition rate of 72.20% in the average case. however, in the best case, it recognizes the collected handwritten characters with 88.5% accuracy. the developed network consumes 3.57 sec (average) in serial implementation and 1.16 sec (average) in parallel implementation using openmp. keywords: offline; handwritten character; associative memory net; openmp; serial; parallel.
9
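the auto-associative memory net above can be sketched with the classical hebbian outer-product rule over bipolar vectors; this numpy version is a stand-in for the authors' ‘c’/openmp implementation, and details such as the zeroed diagonal and synchronous updates are assumptions.

```python
import numpy as np

def train_amn(patterns):
    # hebbian outer-product rule over bipolar (+1/-1) pattern vectors
    P = np.where(np.asarray(patterns) > 0, 1, -1)
    W = P.T @ P
    np.fill_diagonal(W, 0)   # no self-connections (an assumption here)
    return W

def recall(W, x, steps=5):
    # synchronous updates; a noisy character image should settle near
    # the stored pattern it best matches
    s = np.where(np.asarray(x) > 0, 1, -1)
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1, -1)
    return s
```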
abstract in large-scale modern data analysis, first-order optimization methods are usually favored to obtain sparse estimators in high dimensions. this paper performs theoretical analysis of a class of iterative thresholding based estimators defined in this way. oracle inequalities are built to show the nearly minimax rate optimality of such estimators under a new type of regularity conditions. moreover, the sequence of iterates is found to be able to approach the statistical truth within the best statistical accuracy geometrically fast. our results also reveal different benefits brought by convex and nonconvex types of shrinkage.
10
abstract. for a pair of groups g, h we study pairs of actions g on h and h on g such that these pairs are compatible and non-abelian tensor products g ⊗ h are defined.
4
abstract. we study a natural problem in graph sparsification, the spanning tree congestion (stc) problem. informally, the stc problem seeks a spanning tree with no tree-edge routing too many of the original edges. the root of this problem dates back to at least 30 years ago, motivated by applications in network design, parallel computing and circuit design. variants of the problem have also seen algorithmic applications as a preprocessing step of several important graph algorithms. for any general connected graph with n vertices and m edges, we show that its stc is at most o(√(mn)), which is asymptotically optimal since we also demonstrate graphs with stc at least ω(√(mn)). we present a polynomial-time algorithm which computes a spanning tree with congestion o(√(mn) · log n). we also present another algorithm for computing a spanning tree with congestion o(√(mn)); this algorithm runs in sub-exponential time when m = ω(n log^2 n). for achieving the above results, an important intermediate theorem is a generalized győri-lovász theorem, for which chen et al. [14] gave a non-constructive proof. we give the first elementary and constructive proof by providing a local search algorithm with running time o*(4^n), which is a key ingredient of the above-mentioned sub-exponential time algorithm. we discuss a few consequences of the theorem concerning graph partitioning, which might be of independent interest. we also show that for any graph which satisfies certain expanding properties, its stc is at most o(n), and a corresponding spanning tree can be computed in polynomial time. we then use this to show that a random graph has stc θ(n) with high probability.
8
abstract
1
abstract—scene text detection is a challenging problem in computer vision. in this paper, we propose a novel text detection network based on prevalent object detection frameworks. in order to obtain stronger semantic features, we adopt resnet as the feature extraction layers and exploit multi-level features by combining hierarchical convolutional networks. a vertical proposal mechanism is utilized to avoid proposal classification, while the regression layer remains working to improve localization accuracy. our approach evaluated on the icdar2013 dataset achieves 0.91 f-measure, which outperforms previous state-of-the-art results in scene text detection. keywords—scene text detection; deep ctpn
1
abstract we propose a way to construct fiducial distributions for a multidimensional parameter using a step-by-step conditional procedure related to the inferential importance of the components of the parameter. for discrete models, in which the nonuniqueness of the fiducial distribution is well known, we propose to use the geometric mean of the “extreme cases” and show its good behavior with respect to the more traditional arithmetic mean. connections with the generalized fiducial inference approach developed by hannig and with confidence distributions are also analyzed. the suggested procedure strongly simplifies when the statistical model belongs to a subclass of the natural exponential family, called conditionally reducible, which includes the multinomial and the negative-multinomial models. furthermore, because fiducial inference and objective bayesian analysis are both attempts to derive distributions for an unknown parameter without any prior information, it is natural to discuss their relationships. in particular, the reference posteriors, which also depend on the importance ordering of the parameters, are the natural terms of comparison. we show that fiducial and reference posterior distributions coincide in the location-scale models, and we characterize the conditionally reducible natural exponential families for which this happens. the discussion of some classical examples closes the paper.
10
abstract—developers spend a significant amount of time searching for code—e.g., to understand how to complete, correct, or adapt their own code for a new context. unfortunately, the state of the art in code search has not evolved much beyond text search over tokenized source. code has much richer structure and semantics than normal text, and this property can be exploited to specialize the code-search process for better querying, searching, and ranking of code-search results. we present a new code-search engine named source forager. given a query in the form of a c/c++ function, source forager searches a pre-populated code database for similar c/c++ functions. source forager preprocesses the database to extract a variety of simple code features that capture different aspects of code. a search returns the k functions in the database that are most similar to the query, based on the various extracted code features. we tested the usefulness of source forager using a variety of code-search queries from two domains. our experiments show that the ranked results returned by source forager are accurate, and that query-relevant functions can be reliably retrieved even when searching through a large code database that contains very few query-relevant functions. we believe that source forager is a first step towards much-needed tools that provide a better code-search experience. index terms—code search, similar code, program features.
6
abstract machine learning models, including state-of-the-art deep neural networks, are vulnerable to small perturbations that cause unexpected classification errors. this unexpected lack of robustness raises fundamental questions about their generalization properties and poses a serious concern for practical deployments. as such perturbations can remain imperceptible – the formed adversarial examples demonstrate an inherent inconsistency between vulnerable machine learning models and human perception – some prior work casts this problem as a security issue. despite the significance of the discovered instabilities and ensuing research, their cause is not well understood and no effective method has been developed to address the problem. in this paper, we present a novel theory to explain why this unpleasant phenomenon exists in deep neural networks. based on that theory, we introduce a simple, efficient, and effective training approach, batch adjusted network gradients (bang), which significantly improves the robustness of machine learning models. while the bang technique does not rely on any form of data augmentation or the utilization of adversarial images for training, the resultant classifiers are more resistant to adversarial perturbations while maintaining or even enhancing the overall classification performance.
1
abstract
9
abstract—the increasing penetration of renewable energy in recent years has led to more uncertainties in power systems. these uncertainties have to be accommodated by flexible resources (i.e. upward and downward generation reserves). in this paper, a novel concept, uncertainty marginal price (ump), is proposed to price both the uncertainty and reserve. at the same time, the energy is priced at locational marginal price (lmp). a novel market clearing mechanism is proposed to credit the generation and reserve and to charge the load and uncertainty within the robust unit commitment (ruc) in the day-ahead market. we derive the umps and lmps in the robust optimization framework. ump helps allocate the cost of generation reserves to uncertainty sources. we prove that the proposed market clearing mechanism leads to partial market equilibrium. we find that transmission reserves must be kept explicitly in addition to generation reserves for uncertainty accommodation. we prove that transmission reserves for ramping delivery may lead to financial transmission right (ftr) underfunding in existing markets. the ftr underfunding can be covered by congestion fund collected from uncertainty payment in the proposed market clearing mechanism. simulations on a six-bus system and the ieee 118-bus system are performed to illustrate the new concepts and the market clearing mechanism. index terms—uncertainty marginal price, cost causation, robust unit commitment, financial transmission right, generation reserve, transmission reserve
3
abstract. in 2010, everitt and fountain introduced the concept of reflection monoids. the boolean reflection monoids form a family of reflection monoids (symmetric inverse semigroups are boolean reflection monoids of type a). in this paper, we give a family of presentations of boolean reflection monoids and show how these presentations are compatible with mutations of certain quivers. a feature of the quivers in this paper corresponding to presentations of boolean reflection monoids is that the quivers have frozen vertices. our results recover the presentations of boolean reflection monoids given by everitt and fountain and the presentations of symmetric inverse semigroups given by popova. surprisingly, inner by diagram automorphisms of irreducible weyl groups or boolean reflection monoids can be constructed by sequences of mutations preserving the same underlying diagrams. as an application, we study the cellularity of semigroup algebras of boolean reflection monoids and construct new cellular bases of such cellular algebras using presentations we obtained and inner by diagram automorphisms of boolean reflection monoids. key words: boolean reflection monoids; presentations; mutations of quivers; inner by diagram automorphisms; cellular semigroups; cellular basis 2010 mathematics subject classification: 13f60; 20m18; 16g20; 20f55; 51f15
4
abstract. in this paper we introduce and study the conjugacy ratio of a finitely generated group, which is the limit at infinity of the quotient of the conjugacy and standard growth functions. we conjecture that the conjugacy ratio is 0 for all groups except the virtually abelian ones, and confirm this conjecture for certain residually finite groups of subexponential growth, hyperbolic groups, right-angled artin groups, and the lamplighter group.
4
abstract we propose a new paradigm for telecommunications, and develop a framework drawing on concepts from information (i.e., different metrics of complexity) and computational (i.e., agent based modeling) theory, adapted from complex system science. we proceed in a systematic fashion by dividing network complexity understanding and analysis into different layers. the modelling layer forms the foundation of the proposed framework, supporting the analysis and tuning layers. the modelling layer aims at capturing the significant attributes of networks and the interactions that shape them, through the application of tools such as agent-based modelling and graph theoretical abstractions, to derive new metrics that holistically describe a network. the analysis phase completes the core functionality of the framework by linking our new metrics to the overall network performance. the tuning layer augments this core with algorithms that aim at automatically guiding networks toward desired conditions. in order to maximize the impact of our ideas, the proposed approach is rooted in relevant, near-future architectures and use cases in 5g networks, i.e., internet of things (iot) and self-organizing cellular networks. index terms—complex systems science, agent-based modelling, self-organization, 5g, internet of things.
3
abstract— analyzing and reconstructing driving scenarios is crucial for testing and evaluating highly automated vehicles (havs). this research analyzed left-turn / straight-driving conflicts at unprotected intersections by extracting actual vehicle motion data from a naturalistic driving database collected by the university of michigan. nearly 7,000 left turn across path - opposite direction (ltap/od) events involving heavy trucks and light vehicles were extracted and used to build a stochastic model of such ltap/od scenario, which is among the top priority light-vehicle pre-crash scenarios identified by national highway traffic safety administration (nhtsa). statistical analysis showed that vehicle type is a significant factor, whereas the change of season seems to have limited influence on the statistical nature of the conflict. the results can be used to build testing environments for havs to simulate the ltap/od crash cases in a stochastic manner.
3
abstract—a revised incremental conductance (inccond) maximum power point tracking (mppt) algorithm for pv generation systems is proposed in this paper. the commonly adopted traditional inccond method uses a constant step size for voltage adjustment, making it difficult to achieve both a good tracking performance and quick elimination of the oscillations, especially under dramatic changes of the environment conditions. for the revised algorithm, the incremental voltage change step size is adaptively adjusted based on the slope of the power-voltage (p-v) curve. an accelerating factor and a decelerating factor are further applied to adjust the voltage step change considering whether the sign of the p-v curve slope remains the same or not in a subsequent tracking step. in addition, the upper bound of the maximum voltage step change is also updated considering the information of sign changes. the revised mppt algorithm can quickly track the maximum power points (mpps) and remove the oscillation of the actual operation points around the real mpps. the effectiveness of the revised algorithm is demonstrated using a simulation. index terms—inccond mppt algorithm, fractional opencircuit/short-circuit mppt algorithm, p&o mppt algorithm, solar pv generation.
5
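the revised inccond loop above reduces to a short control step: the voltage step follows the magnitude of the p-v slope, an accelerating/decelerating factor reacts to slope-sign changes, and the step bound shrinks on sign flips. gains, factor values, and the state layout below are assumptions of this sketch, not the paper's exact tuning.

```python
def mppt_step(v, i, state):
    # one revised inc-cond iteration; state carries the previous sample,
    # the adaptive step bound and the tuning factors, e.g.
    # state = dict(v_prev=..., i_prev=..., slope_prev=0.0,
    #              k=0.05, acc=1.5, dec=0.5, step_max=0.5)
    dv, di = v - state["v_prev"], i - state["i_prev"]
    slope = i + v * di / dv if dv != 0 else 0.0    # dP/dV = I + V * dI/dV
    step = state["k"] * abs(slope)                 # step size follows p-v slope
    if slope * state["slope_prev"] >= 0:
        step *= state["acc"]                       # same sign: accelerate
    else:
        step *= state["dec"]                       # sign flip: decelerate
        state["step_max"] *= state["dec"]          # and tighten the bound
    step = min(step, state["step_max"])
    state.update(v_prev=v, i_prev=i, slope_prev=slope)
    return v + step if slope > 0 else v - step     # next voltage reference
```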
abstract
1
abstract canonical correlation analysis (cca) is a fundamental statistical tool for exploring the correlation structure between two sets of random variables. in this paper, motivated by the recent success of applying cca to learn low dimensional representations of high dimensional objects, we propose two losses based on the principal angles between the model spaces spanned by the sample canonical variates and their population correspondents, respectively. we further characterize the non-asymptotic error bounds for the estimation risks under the proposed error metrics, which reveal how the performance of sample cca depends adaptively on key quantities including the dimensions, the sample size, the condition number of the covariance matrices and particularly the population canonical correlation coefficients. the optimality of our uniform upper bounds is also justified by lower-bound analysis based on stringent and localized parameter spaces. to the best of our knowledge, for the first time our paper separates p1 and p2 for the first order term in the upper bounds without assuming the residual correlations are zeros. more significantly, our paper derives (1 − λ_k^2)(1 − λ_{k+1}^2)/(λ_k − λ_{k+1})^2 for the first time in the nonasymptotic cca estimation convergence rates, which is essential to understand the behavior of cca when the leading canonical correlation coefficients are close to 1.
10
abstract in the realm of multimodal communication, sign language is, and continues to be, one of the most understudied areas. in line with recent advances in the field of deep learning, there are far reaching implications and applications that neural networks can have for sign language interpretation. in this paper, we present a method for using deep convolutional networks to classify images of both the letters and digits in american sign language. 1. introduction sign language is a unique type of communication that often goes understudied. while the translation process between signs and a spoken or written language is formally called ‘interpretation,’ the function that interpreting plays is the same as that of translation for a spoken language. in our research, we look at american sign language (asl), which is used in the usa and in english-speaking canada and has many different dialects. there are 22 handshapes that correspond to the 26 letters of the alphabet, and you can sign the 10 digits on one hand.
1
abstract a systematic convolutional encoder of rate (n − 1)/n and maximum degree ν generates a code of free distance at most d = ν + 2 and, at best, a column distance profile (cdp) of [2, 3, . . . , d]. a code is maximum distance separable (mds) if it possesses this cdp. applied on a communication channel over which packets are transmitted sequentially and which loses (erases) packets randomly, such a code allows the recovery from any pattern of j erasures in the first j n-packet blocks for j < d, with a delay of at most j blocks counting from the first erasure. this paper addresses the problem of finding the largest d for which a systematic rate (n − 1)/n code over gf(2^m) exists, for given n and m. in particular, constructions for rates (2^m − 1)/2^m and (2^(m−1) − 1)/2^(m−1) are presented which provide optimum values of d equal to 3 and 4, respectively. a search algorithm is also developed, which produces new codes for d for field sizes 2^m ≤ 2^14. using a complete search version of the algorithm, the maximum value of d, and codes that achieve it, are determined for all code rates ≥ 1/2 and every field size gf(2^m) for m ≤ 5 (and for some rates for m = 6).
7
abstract. aschbacher’s program for the classification of simple fusion systems of “odd” type at the prime 2 has two main stages: the classification of 2-fusion systems of subintrinsic component type and the classification of 2-fusion systems of j-component type. we make a contribution to the latter stage by classifying 2-fusion systems with a j-component isomorphic to the 2-fusion systems of several sporadic groups under the assumption that the centralizer of this component is cyclic.
4
abstract we consider the problem of predicting the next observation given a sequence of past observations, and consider the extent to which accurate prediction requires complex algorithms that explicitly leverage long-range dependencies. perhaps surprisingly, our positive results show that for a broad class of sequences, there is an algorithm that predicts well on average, and bases its predictions only on the most recent few observations together with a set of simple summary statistics of the past observations. specifically, we show that for any distribution over observations, if the mutual information between past observations and future observations is upper bounded by i, then a simple markov model over the most recent i/ε observations obtains expected kl error ε—and hence ℓ1 error √ε—with respect to the optimal predictor that has access to the entire past and knows the data generating distribution. for a hidden markov model with n hidden states, i is bounded by log n, a quantity that does not depend on the mixing time, and we show that the trivial prediction algorithm based on the empirical frequencies of length o(log n/ε) windows of observations achieves this error, provided the length of the sequence is d^ω(log n/ε), where d is the size of the observation alphabet. we also establish that this result cannot be improved upon, even for the class of hmms, in the following two senses: first, for hmms with n hidden states, a window length of log n/ε is information-theoretically necessary to achieve expected kl error ε, or ℓ1 error √ε. second, the d^θ(log n/ε) samples required to accurately estimate the markov model when observations are drawn from an alphabet of size d is necessary for any computationally tractable learning/prediction algorithm, assuming the hardness of strongly refuting a certain class of csps.
2
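the "trivial prediction algorithm based on empirical frequencies of short windows" from the abstract above is easy to make concrete: count the symbol following each length-w window and predict the mode. the api below is an assumption of this sketch.

```python
from collections import Counter, defaultdict

def fit_windows(seq, w):
    # empirical frequencies of the symbol following each length-w window
    counts = defaultdict(Counter)
    for t in range(len(seq) - w):
        counts[tuple(seq[t:t + w])][seq[t + w]] += 1
    return counts

def predict_next(counts, history, w):
    # most likely next symbol given only the last w observations
    c = counts.get(tuple(history[-w:]))
    return max(c, key=c.get) if c else None
```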
abstract this paper proposes an efficient and novel method to address range search on multidimensional points in θ(t) time, where t is the number of points reported in r^k space. this is accomplished by introducing a new data structure, called the bits k-d tree. this structure also supports fast updation that takes θ(1) time for insertion and o(log n) time for deletion. the earlier best known algorithm for this problem takes o(log^k n + t) time [5, 15] in the pointer machine model. keywords: bits k-d tree, threaded trie, range search.
8
abstract consider the classical problem of information dissemination: one (or more) nodes in a network have some information that they want to distribute to the remainder of the network. in this paper, we study the cost of information dissemination in networks where edges have latencies, i.e., sending a message from one node to another takes some amount of time. we first generalize the idea of conductance to weighted graphs by defining φ∗ to be the “critical conductance” and ℓ∗ to be the “critical latency”. one goal of this paper is to argue that φ∗ characterizes the connectivity of a weighted graph with latencies in much the same way that conductance characterizes the connectivity of unweighted graphs. we give near tight lower and upper bounds on the problem of information dissemination, up to polylogarithmic factors. specifically, we show that in a graph with (weighted) diameter d (with latencies as weights) and maximum degree ∆, any information dissemination algorithm requires at least ω(min(d + ∆, ℓ∗/φ∗)) time in the worst case. we show several variants of the lower bound (e.g., for graphs with small diameter, graphs with small max-degree, etc.) by reduction to a simple combinatorial game. we then give nearly matching algorithms, showing that information dissemination can be solved in o(min((d + ∆) log^3 n, (ℓ∗/φ∗) log n)) time. this is achieved by combining two cases. we show that the classical push-pull algorithm is (near) optimal when the diameter or the maximum degree is large. for the case where the diameter and the maximum degree are small, we give an alternative strategy in which we first discover the latencies and then use an algorithm for known latencies based on a weighted spanner construction. (our algorithms are within polylogarithmic factors of being tight both for known and unknown latencies.) while it is easiest to express our bounds in terms of φ∗ and ℓ∗, in some cases they do not provide the most convenient definition of conductance in weighted graphs. therefore we give a second (nearly) equivalent characterization, namely the average conductance φ_avg.
8
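as a companion to the abstract above, here is the classical push-pull step it builds on, in its plain unweighted form; the latency-aware machinery of the paper is not reproduced. adjacency is a dict from node to a non-empty neighbour list, and all names are assumptions of this sketch.

```python
import random

def push_pull(adj, sources, rounds):
    # synchronous push-pull gossip: each round, every node contacts one
    # uniformly random neighbour, and information crosses the edge in
    # either direction (push or pull)
    informed = set(sources)
    for _ in range(rounds):
        newly = set()
        for u, nbrs in adj.items():
            v = random.choice(nbrs)
            if u in informed or v in informed:
                newly.update((u, v))
        informed |= newly
    return informed
```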
abstract—this work targets the problem of odor source localization by multi-agent systems. a hierarchical cooperative control has been put forward to solve the problem of locating source of an odor by driving the agents in consensus when at least one agent obtains information about location of the source. synthesis of the proposed controller has been carried out in a hierarchical manner of group decision making, path planning and control. decision making utilizes information of the agents using conventional particle swarm algorithm and information of the movement of filaments to predict the location of the odor source. the predicted source location in the decision level is then utilized to map a trajectory and pass that information to the control level. the distributed control layer uses sliding mode controllers known for their inherent robustness and the ability to reject matched disturbances completely. two cases of movement of agents towards the source, i.e., under consensus and formation have been discussed herein. finally, numerical simulations demonstrate the efficacy of the proposed hierarchical distributed control. index terms—odor source localization, multi-agent systems (mas), sliding mode control (smc), homogeneous agents, cooperative control.
3
abstract many applications infer the structure of a probabilistic graphical model from data to elucidate the relationships between variables. but how can we train graphical models on a massive data set? in this paper, we show how to construct coresets—compressed data sets which can be used as proxy for the original data and have provably bounded worst case error—for gaussian dependency networks (dns), i.e., cyclic directed graphical models over gaussians, where the parents of each variable are its markov blanket. specifically, we prove that gaussian dns admit coresets of size independent of the size of the data set. unfortunately, this does not extend to dns over members of the exponential family in general. as we will prove, poisson dns do not admit small coresets. despite this worst-case result, we will provide an argument why our coreset construction for dns can still work well in practice on count data. to corroborate our theoretical results, we empirically evaluated the resulting core dns on real data sets. the results demonstrate significant gains over no or naive sub-sampling, even in the case of count data.
2
abstract—in animal behavioral biology, there are several cases in which an autonomous observing/training system would be useful. 1) observation of certain species continuously, or for documenting specific events, which happen irregularly; 2) long-term intensive training of animals in preparation for behavioral experiments; and 3) training and testing of animals without human interference, to eliminate potential cues and biases induced by humans. the primary goal of this study is to build a system named catos (computer aided training/observing system) that could be used in the above situations. as a proof of concept, the system was built and tested in a pilot experiment, in which cats were trained to press three buttons differently in response to three different sounds (human speech) to receive food rewards. the system was in use for about 6 months, successfully training two cats. one cat learned to press a particular button, out of three buttons, to obtain the food reward with over 70 percent correctness. index terms—animal training, animal observing, automatic device
5
abstract we consider the problem of learning high-dimensional gaussian graphical models. the graphical lasso is one of the most popular methods for estimating gaussian graphical models. however, it does not achieve the oracle rate of convergence. in this paper, we propose the graphical nonconvex optimization for optimal estimation in gaussian graphical models, which is then approximated by a sequence of convex programs. our proposal is computationally tractable and produces an estimator that achieves the oracle rate of convergence. the statistical error introduced by the sequential approximation using the convex programs is clearly demonstrated via a contraction property. the rate of convergence can be further improved using the notion of sparsity pattern. the proposed methodology is then extended to semiparametric graphical models. we show through numerical studies that the proposed estimator outperforms other popular methods for estimating gaussian graphical models.
10
abstract gaussian processes (gp) are widely used as a metamodel for emulating time-consuming computer codes. we focus on problems involving categorical inputs, with a potentially large number l of levels (typically several tens), partitioned in g ≪ l groups of various sizes. parsimonious covariance functions, or kernels, can then be defined by block covariance matrices t with constant covariances between pairs of blocks and within blocks. however, little is said about the positive definiteness of such matrices, which may limit their practical usage. in this paper, we exploit the hierarchy group/level and provide a parameterization of valid block matrices t, based on a nested bayesian linear model. the same model can be used when the assumption within blocks is relaxed, giving a flexible parametric family of valid covariance matrices with constant covariances between pairs of blocks. as a by-product, we show that the positive definiteness of t is equivalent to the positive definiteness of a small matrix of size g, obtained by averaging each block. we illustrate with an application in nuclear engineering, where one of the categorical inputs is the atomic number in mendeleev’s periodic table and has more than 90 levels.
10
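the by-product result above (positive definiteness of the big block matrix t reduces to that of a small g x g matrix of block averages) suggests a cheap validity check. this sketch performs only the averaging and an eigenvalue test; the precise hypotheses under which the equivalence holds are in the paper and are not reproduced here.

```python
import numpy as np

def block_averaged(T, sizes):
    # collapse each block of T to its mean, yielding the small g x g matrix
    idx = np.cumsum([0] + list(sizes))
    g = len(sizes)
    S = np.empty((g, g))
    for a in range(g):
        for b in range(g):
            S[a, b] = T[idx[a]:idx[a + 1], idx[b]:idx[b + 1]].mean()
    return S

def looks_valid(T, sizes, tol=1e-10):
    # eigenvalue test on the averaged matrix (sketch of the criterion)
    return bool(np.all(np.linalg.eigvalsh(block_averaged(T, sizes)) > tol))
```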
abstract—ldpc codes are used in many applications; however, their error correcting capabilities are limited by the presence of stopping sets and trapping sets. trapping sets and stopping sets occur when specific low-weight error patterns cause a decoder to fail. trapping sets were first discovered with investigation of the error floor of the margulis code. possible solutions are constructions which avoid creating trapping sets, such as progressive edge growth (peg), or methods which remove trapping sets from existing constructions, such as graph covers. this survey examines trapping sets and stopping sets in ldpc codes over channels such as bsc, bec and awgnc. index terms—ldpc codes, trapping sets, stopping sets, qc-ldpc codes, margulis codes, awgnc, peg algorithm, graph covers.
7
abstract
2
abstract. this paper introduces a novel radar interferometry based on the doppler synthetic aperture radar (doppler-sar) paradigm. conventional sar interferometry relies on wideband transmitted waveforms to obtain high range resolution. topography of a surface is directly related to the range difference between two antennas configured at different positions. doppler-sar is a novel imaging modality that uses ultra-narrowband continuous waves (uncw). it takes advantage of high resolution doppler information provided by uncws to form high resolution sar images. we introduce the theory of doppler-sar interferometry, derive the interferometric phase model and develop the equations of height mapping. unlike conventional sar interferometry, we show that the topography of a scene is related to the difference in doppler between two antennas configured at different velocities. while conventional sar interferometry uses range, doppler and doppler due to interferometric phase in height mapping, doppler-sar interferometry uses doppler, doppler-rate and doppler-rate due to interferometric phase in height mapping. we demonstrate our theory in numerical simulations. doppler-sar interferometry offers the advantages of long-range, robust, environmentally friendly operations; low-power, low-cost, lightweight systems suitable for low-payload platforms, such as micro-satellites; and passive applications using sources of opportunity transmitting uncw.
5
abstract), in proc. 48th acm stoc (2016), 684–697.
4
abstract—the cholesky decomposition plays an important role in finding the inverse of correlation matrices, as it is fast and numerically stable for linear system solving, inversion, and factorization compared to singular value decomposition (svd), qr factorization and lu decomposition. as different methods exist to find the cholesky decomposition of a given matrix, this paper presents a comparative study of a proposed rchol algorithm with the conventional methods. the rchol algorithm is an explicit way to estimate the modified cholesky factors of a dynamic correlation matrix.
0
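for context on the record above, this is the textbook cholesky factorization the rchol algorithm is compared against (rchol itself is not reproduced here): a = l lᵀ for a symmetric positive-definite a.

```python
import numpy as np

def cholesky_lower(A):
    # returns lower-triangular L with A = L @ L.T; raises on non-PD input
    n = A.shape[0]
    L = np.zeros_like(A, dtype=float)
    for j in range(n):
        d = A[j, j] - L[j, :j] @ L[j, :j]
        if d <= 0:
            raise np.linalg.LinAlgError("matrix is not positive definite")
        L[j, j] = np.sqrt(d)
        for i in range(j + 1, n):
            L[i, j] = (A[i, j] - L[i, :j] @ L[j, :j]) / L[j, j]
    return L
```

inverting a correlation matrix then reduces to two triangular solves against l, which is the speed and stability argument the abstract makes.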
abstract. we recast euclid’s proof of the infinitude of prime numbers as a euclidean criterion for a domain to have infinitely many atoms. we make connections with furstenberg’s “topological” proof of the infinitude of prime numbers and show that our criterion applies even in certain domains in which not all nonzero nonunits factor into products of irreducibles.
0
abstract—new sufficient conditions for determining in closed form the capacity region of point-to-point memoryless two-way channels (twcs) are derived. the proposed conditions not only relax shannon’s condition which can identify only twcs with a certain symmetry property but also generalize other existing results. examples are given to demonstrate the advantages of the proposed conditions. index terms—network information theory, two-way channels, capacity region, inner and outer bounds, channel symmetry.
7
abstract this paper deals with the reducibility property of semidirect products of the form v ∗ d relatively to graph equation systems, where d denotes the pseudovariety of definite semigroups. we show that, if the pseudovariety v is reducible with respect to the canonical signature κ consisting of the multiplication and the (ω − 1)-power, then v ∗ d is also reducible with respect to κ. keywords. pseudovariety, definite semigroup, semidirect product, implicit signature, graph equations, reducibility.
4
abstract. we classify all convex polyomino ideals which are linearly related or have a linear resolution. convex stack polyominoes whose ideals are extremal gorenstein are also classified. in addition, we characterize, in combinatorial terms, the distributive lattices whose join-meet ideals are extremal gorenstein or have a linear resolution.
0
abstract—clustered distributed storage models real data centers where intra- and cross-cluster repair bandwidths are different. in this paper, exact-repair minimum-storage-regenerating (msr) codes achieving capacity of clustered distributed storage are designed. focus is given on two cases: ε = 0 and ε = 1/(n−k), where ε is the ratio of the available cross- and intra-cluster repair bandwidths, n is the total number of distributed nodes and k is the number of contact nodes in data retrieval. the former represents the scenario where cross-cluster communication is not allowed, while the latter corresponds to the case of minimum cross-cluster bandwidth that is possible under the minimum storage overhead constraint. for the ε = 0 case, two types of locally repairable codes are proven to achieve the msr point. as for ε = 1/(n − k), an explicit msr coding scheme is suggested for the two-cluster situation under the specific condition of n = 2k.
7
abstract given a pattern w and a text t, the speed of a pattern matching algorithm over t with regard to w is the ratio of the length of t to the number of text accesses performed to search for w in t. we first propose a general method for computing the limit of the expected speed of pattern matching algorithms, with regard to w, over iid texts. next, we show how to determine the greatest speed which can be achieved among a large class of algorithms, together with an algorithm running at this speed. since the complexity of this determination makes it impossible to deal with patterns of length greater than 4, we propose a polynomial heuristic. finally, our approaches are compared with 9 pre-existing pattern matching algorithms from both a theoretical and a practical point of view, i.e. both in terms of limit expected speed on iid texts, and in terms of observed average speed on real data. in all cases, the pre-existing algorithms are outperformed.
8
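the speed definition above (length of t divided by the number of text accesses) can be instrumented directly: wrap the text so accesses are counted, then run any matcher over the wrapper. horspool is used purely as an example algorithm; the wrapper and names are assumptions of this sketch.

```python
class CountingText:
    # wraps a text and counts character accesses so that speed can be
    # measured as len(t) / t.accesses
    def __init__(self, t):
        self.t, self.accesses = t, 0
    def __getitem__(self, i):
        self.accesses += 1
        return self.t[i]
    def __len__(self):
        return len(self.t)

def horspool(w, t):
    # boyer-moore-horspool matching over the counting wrapper
    m = len(w)
    shift = {c: m - 1 - i for i, c in enumerate(w[:-1])}
    occ, j = [], m - 1
    while j < len(t):
        k = 0
        while k < m and t[j - k] == w[m - 1 - k]:
            k += 1
        if k == m:
            occ.append(j - m + 1)
        j += shift.get(t[j], m)
    return occ
```

e.g. t = CountingText(text); horspool(w, t); speed = len(t) / t.accesses.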
abstract this paper presents a framework for intrinsic point of interest discovery from trajectory databases. intrinsic points of interest are regions of a geospatial area innately defined by the spatial and temporal aspects of trajectory data, and can be of varying size, shape, and resolution. any trajectory database exhibits such points of interest, and hence are intrinsic, as compared to most other point of interest definitions which are said to be extrinsic, as they require trajectory metadata, external knowledge about the region the trajectories are observed, or other application-specific information. spatial and temporal aspects are qualities of any trajectory database, making the framework applicable to data from any domain and of any resolution. the framework is developed under recent developments on the consistency of nonparametric hierarchical density estimators and enables the possibility of formal statistical inference and evaluation over such intrinsic points of interest. comparisons of the pois uncovered by the framework in synthetic truth data to thousands of parameter settings for common poi discovery methods show a marked improvement in fidelity without the need to tune any parameters by hand.
2
abstract particle swarm optimisation is a metaheuristic algorithm which finds reasonable solutions in a wide range of applied problems if suitable parameters are used. we study the properties of the algorithm in the framework of random dynamical systems which, due to the quasi-linear swarm dynamics, yields analytical results for the stability properties of the particles. such considerations predict a relationship between the parameters of the algorithm that marks the edge between convergent and divergent behaviours. comparison with simulations indicates that the algorithm performs best near this margin of instability.
9
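a bare-bones pso sketch to ground the stability discussion above: with inertia w and acceleration coefficients c1, c2, the swarm converges or diverges depending on where (w, c1 + c2) sits relative to the analytically predicted margin. the defaults here are common choices, not values taken from the paper.

```python
import numpy as np

def pso(f, dim, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    # minimise f over R^dim with a plain global-best swarm
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, (n, dim))
    v = np.zeros((n, dim))
    pbest = x.copy()
    pval = np.apply_along_axis(f, 1, x)
    g = pbest[pval.argmin()]
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        val = np.apply_along_axis(f, 1, x)
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()]
    return g, pval.min()
```

e.g. g, best = pso(lambda z: (z ** 2).sum(), dim=5).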
abstract we address the detection of a low rank n × n deterministic matrix x0 from the noisy observation x0 + z when n → ∞, where z is a complex gaussian random matrix with independent identically distributed nc(0, 1/n) entries. thanks to large random matrix theory results, it is now well-known that if the largest singular value λ1(x0) of x0 verifies λ1(x0) > 1, then it is possible to exhibit consistent tests. in this contribution, we prove a contrario that under the condition λ1(x0) < 1, there are no consistent tests. our proof is inspired by previous works devoted to the case of rank 1 matrices x0. index terms—statistical detection tests, large random matrices, large deviation principle.
10
abstract automated analysis methods are crucial aids for monitoring and defending a network to protect the sensitive or confidential data it hosts. this work introduces a flexible, powerful, and unsupervised approach to detecting anomalous behavior in computer and network logs; one that largely eliminates domain-dependent feature engineering employed by existing methods. by treating system logs as threads of interleaved “sentences” (event log lines) to train online unsupervised neural network language models, our approach provides an adaptive model of normal network behavior. we compare the effectiveness of both standard and bidirectional recurrent neural network language models at detecting malicious activity within network log data. extending these models, we introduce a tiered recurrent architecture, which provides context by modeling sequences of users’ actions over time. compared to isolation forest and principal components analysis, two popular anomaly detection algorithms, we observe superior performance on the los alamos national laboratory cyber security dataset. for log-line-level red team detection, our best performing character-based model provides test set area under the receiver operator characteristic curve of 0.98, demonstrating the strong fine-grained anomaly detection performance of this approach on open vocabulary logging sources.
9
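a minimal sketch of the core ingredient described above: a character-level recurrent language model whose per-line loss serves as an anomaly score (higher loss means a less "normal" log line). the tiered, user-context architecture and the bidirectional variant are not shown; all names and sizes are illustrative.

```python
import torch
import torch.nn as nn

class CharLM(nn.Module):
    """character-level lstm language model over log lines."""
    def __init__(self, vocab_size, emb=32, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab_size)

    def forward(self, x):                  # x: (batch, seq_len) integer tokens
        h, _ = self.lstm(self.embed(x))
        return self.head(h)                # (batch, seq_len, vocab)

def line_anomaly_score(model, line_ids):
    """mean negative log-likelihood of next-character predictions for one line."""
    x, y = line_ids[:, :-1], line_ids[:, 1:]
    logits = model(x)
    loss = nn.functional.cross_entropy(
        logits.reshape(-1, logits.size(-1)), y.reshape(-1))
    return loss.item()

# hypothetical usage: ids encodes one log line as integer character codes
model = CharLM(vocab_size=128)
ids = torch.randint(0, 128, (1, 60))
print(line_anomaly_score(model, ids))
```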
abstract—when a human matches two images, the viewer has a natural tendency to view the wide area around the target pixel to obtain clues of the right correspondence. however, designing a matching cost function that works on a large window in the same way is difficult. the cost function is typically not intelligent enough to discard information irrelevant to the target pixel, resulting in undesirable artifacts. in this paper, we propose a novel convolutional neural network (cnn) module to learn a stereo matching cost with a large-sized window. unlike conventional pooling layers with strides, the proposed per-pixel pyramid-pooling layer can cover a large area without a loss of resolution and detail. therefore, the learned matching cost function can successfully utilize the information from a large area without introducing the fattening effect. the proposed method is robust despite the presence of weak textures, depth discontinuities, and illumination and exposure differences. the proposed method achieves near-peak performance on the middlebury benchmark. index terms—stereo matching, pooling, cnn
1
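a minimal sketch of the stride-1, multi-window pooling idea: every output pixel aggregates progressively larger neighborhoods while the spatial resolution is fully preserved. the window sizes and the use of average pooling are assumptions for illustration, not the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PerPixelPyramidPooling(nn.Module):
    """pool at several window sizes with stride 1, so each output pixel sees a
    large neighborhood without any downsampling (no loss of resolution)."""
    def __init__(self, sizes=(3, 7, 15, 31)):   # illustrative window sizes
        super().__init__()
        self.sizes = sizes

    def forward(self, x):                        # x: (n, c, h, w)
        pooled = [x]
        for k in self.sizes:
            pooled.append(F.avg_pool2d(x, kernel_size=k, stride=1, padding=k // 2))
        return torch.cat(pooled, dim=1)          # (n, c*(1+len(sizes)), h, w)

feat = torch.randn(1, 16, 64, 64)
print(PerPixelPyramidPooling()(feat).shape)      # torch.Size([1, 80, 64, 64])
```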
abstract this paper considers two brownian motions in a situation where one is correlated to the other with a slight delay. we study the problem of estimating the time lag parameter between these brownian motions from their high-frequency observations, which are possibly subject to measurement errors. the measurement errors are assumed to be i.i.d., centered gaussian and independent of the latent processes. we investigate the asymptotic structure of the likelihood ratio process for this model when the lag parameter is asymptotically infinitesimal. we show that the structure of the limit experiment depends on the level of the measurement errors: if the measurement errors locally dominate the latent brownian motions, the model enjoys the lan property. otherwise, the limit experiment is not among the typical ones appearing in the literature. we also discuss the efficient estimation of the lag parameter to highlight the statistical implications. keywords and phrases: asymptotic efficiency; endogenous noise; lead-lag effect; local asymptotic normality; microstructure noise.
10
abstract we provide a new computationally-efficient class of estimators for risk minimization. we show that these estimators are robust for general statistical models: in the classical huber ε-contamination model and in heavy-tailed settings. our workhorse is a novel robust variant of gradient descent, and we provide conditions under which our gradient descent variant provides accurate estimators in a general convex risk minimization problem. we provide specific consequences of our theory for linear regression, logistic regression and for estimation of the canonical parameters in an exponential family. these results provide some of the first computationally tractable and provably robust estimators for these canonical statistical models. finally, we study the empirical performance of our proposed methods on synthetic and real datasets, and find that our methods convincingly outperform a variety of baselines.
2
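a toy sketch of the robust-gradient-descent idea on linear regression under ε-contamination; the coordinate-wise trimmed mean below is a simple stand-in aggregator, not the authors' estimator.

```python
import numpy as np

def trimmed_mean(G, trim=0.1):
    """coordinate-wise trimmed mean of per-sample gradients G of shape (n, d):
    a simple robust aggregator standing in for the paper's estimator."""
    n = G.shape[0]
    k = int(trim * n)
    S = np.sort(G, axis=0)
    return S[k:n - k].mean(axis=0)

def robust_gd_linear_regression(X, y, steps=200, lr=0.1, trim=0.1):
    """gradient descent for least squares with robust gradient aggregation."""
    theta = np.zeros(X.shape[1])
    for _ in range(steps):
        residual = X @ theta - y                   # (n,)
        per_sample_grads = residual[:, None] * X   # (n, d)
        theta -= lr * trimmed_mean(per_sample_grads, trim)
    return theta

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=500)
y[:25] += 50.0                                     # epsilon-contaminated responses
print(robust_gd_linear_regression(X, y).round(2))  # close to (1, -2, 0.5)
```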
abstract this work introduces a novel framework for quantifying the presence and strength of recurrent dynamics in video data. specifically, we provide continuous measures of periodicity (perfect repetition) and quasiperiodicity (superposition of periodic modes with non-commensurate periods), in a way which does not require segmentation, training, object tracking or 1-dimensional surrogate signals. our methodology operates directly on video data. the approach combines ideas from nonlinear time series analysis (delay embeddings) and computational topology (persistent homology), by translating the problem of finding recurrent dynamics in video data, into the problem of determining the circularity or toroidality of an associated geometric space. through extensive testing, we show the robustness of our scores with respect to several noise models/levels; we show that our periodicity score is superior to other methods when compared to human-generated periodicity rankings; and furthermore, we show that our quasiperiodicity score clearly indicates the presence of biphonation in videos of vibrating vocal folds, which has never before been accomplished end to end quantitatively.
1
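a minimal sketch of the pipeline just described: a sliding-window (delay) embedding of the frames turns recurrence into a geometric loop, and the maximum 1-dimensional persistence scores its circularity. it assumes the ripser package for persistent homology; the paper's normalization and preprocessing are more careful.

```python
import numpy as np
from ripser import ripser   # assumption: the ripser.py package is installed

def sliding_window_cloud(frames, d=10, tau=2):
    """delay embedding of video: stack d frames, tau steps apart, per time point.
    frames: (T, h*w) array of flattened grayscale frames."""
    T = frames.shape[0] - (d - 1) * tau
    return np.stack([frames[t: t + d * tau: tau].ravel() for t in range(T)])

def periodicity_score(frames, d=10, tau=2):
    """max persistence in H1 of the sliding-window point cloud: recurrent
    dynamics trace out a loop, so a long-lived 1-cycle indicates periodicity."""
    X = sliding_window_cloud(frames, d, tau)
    X -= X.mean(axis=0)                       # simple centering
    dgm1 = ripser(X, maxdim=1)['dgms'][1]
    return 0.0 if len(dgm1) == 0 else float(np.max(dgm1[:, 1] - dgm1[:, 0]))

t = np.arange(200)
periodic = np.sin(2 * np.pi * t / 20)[:, None] * np.ones((1, 64))  # toy "video"
print(periodicity_score(periodic))
```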
abstract there is no denying the tremendous leap in the performance of machine learning methods in the past half-decade. some might even say that specific sub-fields in pattern recognition, such as machine vision, are as good as solved, reaching human and super-human levels. arguably, lack of training data and computation power are all that stand between us and solving the remaining ones. in this position paper we underline cases in vision which are challenging to machines and even to human observers. this is to show limitations of contemporary models that are hard to ameliorate by following the current trend of increasing training data, network capacity or computational power. moreover, we claim that attempting to do so is in principle a suboptimal approach. we provide a taster of such examples in hope to encourage and challenge the machine learning community to develop new directions to solve the said difficulties.
1
abstract motivated by recent work on ordinal embedding (kleindessner and von luxburg, 2014), we derive large sample consistency results and rates of convergence for the problem of embedding points based on triple or quadruple distance comparisons. we also consider a variant of this problem where only local comparisons are provided. finally, inspired by jamieson and nowak (2011), we bound the number of such comparisons needed to achieve consistency. keywords: ordinal embedding, non-metric multidimensional scaling (mds), dissimilarity comparisons, landmark multidimensional scaling.
10
abstract we study one dimension in program evolution, namely the evolution of the datatype declarations in a program. to this end, a suite of basic transformation operators is designed. we cover structure-preserving refactorings, but also structure-extending and -reducing adaptations. both the object programs that are subject to datatype transformations, and the meta programs that encode datatype transformations are functional programs.
6
abstract in some applications, the variance of additive measurement noise depends on the signal that we aim to measure. for instance, additive gaussian signal-dependent noise (agsdn) channel models are used in molecular and optical communication. herein we provide lower and upper bounds on the capacity of additive signal-dependent noise (asdn) channels. the idea of the first lower bound is the extension of the majorization inequality, while the second one uses calculations based on the fact that h(y) > h(y|z). both of them are valid for all additive signal-dependent noise (asdn) channels defined in the paper. the upper bound is based on a previous idea of the authors (“symmetric relative entropy”) and is used for the additive gaussian signal-dependent noise (agsdn) channels. these bounds indicate that in asdn channels (unlike the classical awgn channels), the capacity does not necessarily become larger by making the variance function of the noise smaller. we also provide sufficient conditions under which the capacity becomes infinity. this is complemented by a number of conditions that imply capacity is finite and a unique capacity achieving measure exists (in the sense of the output measure). keywords: signal-dependent noise channels, molecular communication, channels with infinite capacity, existence of capacity-achieving distribution.
7
abstract we introduce a new model of stochastic bandits with adversarial corruptions which aims to capture settings where most of the input follows a stochastic pattern but some fraction of it can be adversarially changed to trick the algorithm, e.g., click fraud, fake reviews and email spam. the goal of this model is to encourage the design of bandit algorithms that (i) work well in mixed adversarial and stochastic models, and (ii) whose performance deteriorates gracefully as we move from fully stochastic to fully adversarial models. in our model, the rewards for all arms are initially drawn from a distribution and are then altered by an adaptive adversary. we provide a simple algorithm whose performance gracefully degrades with the total corruption the adversary injected in the data, measured by the sum across rounds of the biggest alteration the adversary made in the data in that round; this total corruption is denoted by c. our algorithm retains the optimal guarantee (up to a logarithmic term) if the input is stochastic, and its performance degrades linearly with the amount of corruption c, while crucially being agnostic to it. we also provide a lower bound showing that this linear degradation is necessary if the algorithm achieves optimal performance in the stochastic setting (the lower bound works even for a known amount of corruption, a special case in which our algorithm achieves optimal performance without the extra logarithm).
8
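a small sketch of the input model itself, useful for testing bandit algorithms in this setting: stochastic rewards are drawn first, an adversary then alters them, and the total corruption c is measured exactly as defined above. the adversary below is a hypothetical example (the model allows adaptive adversaries).

```python
import numpy as np

def corrupted_bandit_run(means, corrupt, T=1000, seed=0):
    """draw stochastic rewards, then let an adversary alter them. returns the
    altered reward table and the total corruption
        C = sum_t max_arm |alteration made at round t|   (the abstract's measure)."""
    rng = np.random.default_rng(seed)
    stochastic = rng.normal(loc=means, scale=1.0, size=(T, len(means)))
    altered = stochastic.copy()
    for t in range(T):
        altered[t] = corrupt(t, altered[t])      # adversary may rewrite round t
    C = np.abs(altered - stochastic).max(axis=1).sum()
    return altered, C

# hypothetical adversary: during the first 50 rounds, make the worst arm look best
def corrupt(t, rewards):
    if t < 50:
        rewards = rewards.copy()
        rewards[-1] += 5.0
    return rewards

_, C = corrupted_bandit_run(means=np.array([1.0, 0.5, 0.0]), corrupt=corrupt)
print("total corruption C =", C)
```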
abstract rectified linear units, or relus, have become the preferred activation function for artificial neural networks. in this paper we consider two basic learning problems assuming that the underlying data follow a generative model based on a relu-network – a neural network with relu activations. as a primarily theoretical study, we limit ourselves to a single-layer network. the first problem we study corresponds to dictionary learning in the presence of nonlinearity (modeled by the relu functions). given a set of observation vectors yi ∈ rd, i = 1, 2, . . . , n, we aim to recover the d × k matrix a and the latent vectors {ci} ⊂ rk under the model yi = relu(aci + b), where b ∈ rd is a random bias. we show that it is possible to recover the column space of a within an error of o(d) (in frobenius norm) under certain conditions on the probability distribution of b. the second problem we consider is that of robust recovery of the signal in the presence of outliers, i.e., large but sparse noise. in this setting we are interested in recovering the latent vector c from its noisy nonlinear sketches of the form v = relu(ac) + e + w, where e ∈ rd denotes the outliers with sparsity s and w ∈ rd denotes the dense but small noise. this line of work has recently been studied (soltanolkotabi, 2017) without the presence of outliers. for this problem, we show that a generalized lasso algorithm is able to recover the signal c ∈ rk within an ℓ2 error of o(·) when a is a random gaussian matrix.
7
abstract in reinforcement learning, agents learn by performing actions and observing their outcomes. sometimes, it is desirable for a human operator to interrupt an agent in order to prevent dangerous situations from happening. yet, as part of their learning process, agents may link these interruptions, which impact their reward, to specific states and deliberately avoid them. the situation is particularly challenging in a multi-agent context because agents might not only learn from their own past interruptions, but also from those of other agents. orseau and armstrong [16] defined safe interruptibility for one learner, but their work does not naturally extend to multi-agent systems. this paper introduces dynamic safe interruptibility, an alternative definition more suited to decentralized learning problems, and studies this notion in two learning frameworks: joint action learners and independent learners. we give realistic sufficient conditions on the learning algorithm to enable dynamic safe interruptibility in the case of joint action learners, yet show that these conditions are not sufficient for independent learners. we show however that if agents can detect interruptions, it is possible to prune the observations to ensure dynamic safe interruptibility even for independent learners.
2
abstract in this article, we consider the general problem of checking the correctness of matrix multiplication. given three n × n matrices a, b, and c, the goal is to verify that a × b = c without carrying out the computationally costly operations of matrix multiplication and comparing the product a × b with c, term by term. this is especially important when some or all of these matrices are very large, and when the computing environment is prone to soft errors. here we extend freivalds’ algorithm to a gaussian variant of freivalds’ algorithm (gvfa) by projecting the product a × b as well as c onto a gaussian random vector and then comparing the resulting vectors. the computational complexity of gvfa is consistent with that of freivalds’ algorithm, which is o(n^2). however, unlike freivalds’ algorithm, whose probability of a false positive is 2^{-k}, where k is the number of iterations, our theoretical analysis shows that when a × b ≠ c, gvfa produces a false positive on a set of inputs of measure zero with exact arithmetic. when we introduce round-off error and floating point arithmetic into our analysis, we can show that the larger this error, the higher the probability that gvfa avoids false positives.
8
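the test itself is a few lines; the sketch below follows the description above (project a × b and c onto a gaussian random vector and compare), with an explicit absolute tolerance since, as the abstract notes, round-off interacts with the false-positive behavior.

```python
import numpy as np

def gvfa(A, B, C, tol=1e-8, seed=None):
    """gaussian variant of freivalds' algorithm: check A @ B == C in O(n^2)
    work by comparing A @ (B @ x) with C @ x for a gaussian random vector x."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(C.shape[1])
    return np.allclose(A @ (B @ x), C @ x, rtol=0.0, atol=tol)

rng = np.random.default_rng(0)
A, B = rng.normal(size=(100, 100)), rng.normal(size=(100, 100))
C = A @ B
print(gvfa(A, B, C))          # True
C[3, 7] += 1e-3               # inject a single-entry soft error
print(gvfa(A, B, C))          # False (with probability 1, up to round-off)
```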
abstract. we offer a general bayes theoretic framework to tackle the model selection problem under a two-step prior design: the first-step prior serves to assess the model selection uncertainty, and the secondstep prior quantifies the prior belief on the strength of the signals within the model chosen from the first step. we establish non-asymptotic oracle posterior contraction rates under (i) a new bernstein-inequality condition on the log likelihood ratio of the statistical experiment, (ii) a local entropy condition on the dimensionality of the models, and (iii) a sufficient mass condition on the second-step prior near the best approximating signal for each model. the first-step prior can be designed generically. the resulting posterior mean also satisfies an oracle inequality, thus automatically serving as an adaptive point estimator in a frequentist sense. model mis-specification is allowed in these oracle rates. the new bernstein-inequality condition not only eliminates the convention of constructing explicit tests with exponentially small type i and ii errors, but also suggests the intrinsic metric to use in a given statistical experiment, both as a loss function and as an entropy measurement. this gives a unified reduction scheme for many experiments considered in [23] and beyond. as an illustration for the scope of our general results in concrete applications, we consider (i) trace regression, (ii) shape-restricted isotonic/convex regression, (iii) high-dimensional partially linear regression and (iv) covariance matrix estimation in the sparse factor model. these new results serve either as theoretical justification of practical prior proposals in the literature, or as an illustration of the generic construction scheme of a (nearly) minimax adaptive estimator for a multi-structured experiment.
10
abstract. this paper introduces the first deep neural network-based estimation metric for the jigsaw puzzle problem. given two puzzle piece edges, the neural network predicts whether or not they should be adjacent in the correct assembly of the puzzle, using nothing but the pixels of each piece. the proposed metric exhibits an extremely high precision even though no manual feature extraction is performed. when incorporated into an existing puzzle solver, the solution’s accuracy increases significantly, thereby achieving a new state of the art.
1
abstract this paper proposes distributed algorithms for multi-agent networks to achieve a solution in finite time to a linear equation ax = b where a has full row rank, and with the minimum l1-norm in the underdetermined case (where a has more columns than rows). the underlying network is assumed to be undirected and fixed, and an analytical proof is provided for the proposed algorithm to drive all agents’ individual states to converge to a common value, viz. a solution of ax = b, which is the minimum l1-norm solution in the underdetermined case. numerical simulations are also provided as validation of the proposed algorithms.
3
abstract this paper studies unmanned aerial vehicle (uav) aided wireless communication systems where a uav supports uplink communications of multiple ground nodes (gns) while flying over the area of interest. in this system, the propulsion energy consumption at the uav is taken into account so that the uav’s velocity and acceleration should not exceed a certain threshold. we formulate the minimum average rate maximization problem and the energy efficiency (ee) maximization problem by jointly optimizing the trajectory, velocity, and acceleration of the uav and the uplink transmit power at the gns. as these problems are non-convex in general, we employ the successive convex approximation (sca) techniques. to this end, proper convex approximations for the non-convex constraints are derived, and iterative algorithms are proposed which converge to a local optimal point. numerical results demonstrate that the proposed algorithms outperform baseline schemes for both problems. especially for the ee maximization problem, the proposed algorithm exhibits about a 109% gain over the baseline scheme.
7
abstract. many high-dimensional uncertainty quantification problems are solved by polynomial dimensional decomposition (pdd), which represents a fourier-like series expansion in terms of random orthonormal polynomials with increasing dimensions. this study constructs dimension-wise and orthogonal splitting of polynomial spaces, proves completeness of the polynomial orthogonal basis for prescribed assumptions, and demonstrates mean-square convergence to the correct limit – all associated with pdd. a second-moment error analysis reveals that pdd cannot commit a larger error than polynomial chaos expansion (pce) for appropriately chosen truncation parameters. from the comparison of computational efforts required to estimate with the same precision the variance of an output function involving exponentially attenuating expansion coefficients, the pdd approximation can be markedly more efficient than the pce approximation. key words. uncertainty quantification, anova decomposition, multivariate orthogonal polynomials, polynomial chaos expansion.
10
abstract over the recent years, the field of whole metagenome shotgun sequencing has witnessed significant growth due to the high-throughput sequencing technologies that allow sequencing genomic samples more cheaply, faster, and with better coverage than before. this technical advancement has initiated the trend of sequencing multiple samples in different conditions or environments to explore the similarities and dissimilarities of the microbial communities. examples include the human microbiome project and various studies of the human intestinal tract. with the availability of ever larger databases of such measurements, finding samples similar to a given query sample is becoming a central operation. in this paper, we develop a content-based exploration and retrieval method for whole metagenome sequencing samples. we apply a distributed string mining framework to efficiently extract all informative sequence k-mers from a pool of metagenomic samples and use them to measure the dissimilarity between two samples. we evaluate the performance of the proposed approach on two human gut metagenome data sets as well as human microbiome project metagenomic samples. we observe significant enrichment for diseased gut samples in results of queries with another diseased sample and very high accuracy in discriminating between different body sites even though the method is unsupervised. a software implementation of the dsm framework is available at https://github.com/hiitmetagenomics/dsm-framework.
5
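a minimal sketch of the retrieval primitive: build k-mer frequency profiles per sample and compare them with a simple l1 dissimilarity. the paper's distributed string mining selects informative k-mers and scales to full data sets; this stand-in only illustrates the idea.

```python
from collections import Counter

def kmer_profile(reads, k=12):
    """raw k-mer counts pooled over all reads of one metagenomic sample."""
    counts = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            counts[read[i:i + k]] += 1
    return counts

def dissimilarity(p, q):
    """l1 distance between frequency-normalized k-mer profiles; a simple
    stand-in for the distributed-string-mining measure in the paper."""
    np_, nq = sum(p.values()), sum(q.values())
    keys = set(p) | set(q)
    return sum(abs(p[key] / np_ - q[key] / nq) for key in keys)

sample1 = ["ACGTACGTGGTACCTTAGC", "TTGACCGTAAGGCTA"]
sample2 = ["ACGTACGTGGTACCTTAGC", "GGGCCCATATATTTT"]
print(dissimilarity(kmer_profile(sample1, k=5), kmer_profile(sample2, k=5)))
```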
abstract. we study lattice embeddings for the class of countable groups γ defined by the property that the largest amenable uniformly recurrent subgroup aγ is continuous. when aγ comes from an extremely proximal action and the envelope of aγ is co-amenable in γ, we obtain restrictions on the locally compact groups g that contain a copy of γ as a lattice, notably regarding normal subgroups of g, product decompositions of g, and more generally dense mappings from g to a product of locally compact groups. we then focus on a family of finitely generated groups acting on trees within this class, and show that these embed as cocompact irreducible lattices in some locally compact wreath products. this provides examples of finitely generated simple groups quasi-isometric to a wreath product c ≀ f , where c is a finite group and f a non-abelian free group. keywords. lattices, locally compact groups, strongly proximal actions, chabauty space, groups acting on trees, irreducible lattices in wreath products.
4
abstract—in a typical multitarget tracking (mtt) scenario, the sensor state is either assumed known, or tracking is performed based on the sensor’s (relative) coordinate frame. this assumption becomes violated when the mtt sensor, such as a vehicular radar, is mounted on a vehicle, and the target state should be represented in a global (absolute) coordinate frame. then it is important to consider the uncertain sensor location for mtt. furthermore, in a multisensor scenario, where multiple sensors observe a common set of targets, state information from one sensor can be utilized to improve the state of another sensor. in this paper, we present a poisson multi-bernoulli mtt filter, which models the uncertain sensor state. the multisensor case is addressed in an asynchronous way, where measurements are incorporated sequentially based on the arrival of new sensor measurements. in doing so, targets observed from a well localized sensor reduce the state uncertainty at another poorly localized sensor, provided that a common non-empty subset of features is observed. the proposed mtt filter has low computational demands due to its parametric implementation. numerical results demonstrate the performance benefits of modeling the uncertain sensor state in feature tracking as well as the reduction of sensor state uncertainty in a multisensor scenario compared to a per-sensor kalman filter. scalability results display the linear increase of computation time with the number of sensors or features present.
3
abstract solving #sat problems is an important area of work. in this paper, we discuss implementing tetris, an algorithm originally designed for handling natural joins, as an exact model counter for the #sat problem. tetris uses a simple geometric framework, yet manages to achieve the fractional hypertree-width bound. its design allows it to handle complex problems involving extremely large numbers of clauses, on which other state-of-the-art model counters do not perform well, yet it still performs strongly on standard sat benchmarks. we have achieved the following objectives. first, we have found a natural set of model counting benchmarks on which tetris outperforms other model counters. second, we have constructed a data structure capable of efficiently handling and caching all of the data tetris needs to work on over the course of the algorithm. third, we have modified tetris in order to move from a theoretical, asymptotic-time-focused environment to one that performs well in practice. in particular, we have managed to produce results within a single order of magnitude of other solvers on most benchmarks, and to outperform those solvers by multiple orders of magnitude on others.
8
abstract we propose a new grayscale image denoiser, dubbed neural affine image denoiser (neural aide), which utilizes a neural network in a novel way. unlike other neural network based image denoising methods, which typically apply simple supervised learning to learn a mapping from a noisy patch to a clean patch, we train a neural network to learn an affine mapping that gets applied to a noisy pixel, based on its context. our formulation enables both supervised training of the network from the labeled training dataset and adaptive fine-tuning of the network parameters using the given noisy image subject to denoising. the key tool in devising neural aide is an estimated loss function of the mse of the affine mapping, computed solely from the noisy data. as a result, our algorithm can outperform most of the recent state-of-the-art methods on the standard benchmark datasets. moreover, our fine-tuning method can nicely overcome one of the drawbacks of patch-level supervised learning methods in image denoising; namely, a supervised model trained with a mismatched noise variance can be mostly corrected as long as we have the matched noise variance during the fine-tuning step.
1
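the "estimated loss solely from noisy data" can be made concrete for additive gaussian noise with known variance: for an affine mapping x̂ = a·z + b of a noisy pixel z, a sure-style identity gives an unbiased estimate of the mse from z alone. the formula below is derived under that gaussian assumption and is not claimed to be the paper's exact estimator.

```python
import numpy as np

def estimated_affine_mse(a, b, z, sigma2):
    """unbiased estimate of E[(a*z + b - x)^2] using only the noisy pixel z,
    for z = x + n with n ~ N(0, sigma2):
        (a*z + b - z)^2 + (2*a - 1) * sigma2
    (a sure-style identity; a network can be trained to minimize this directly)."""
    return (a * z + b - z) ** 2 + (2.0 * a - 1.0) * sigma2

# sanity check by monte carlo: the estimate matches the true mse on average
rng = np.random.default_rng(0)
x, sigma2, a, b = 3.0, 0.25, 0.8, 0.5
z = x + rng.normal(scale=np.sqrt(sigma2), size=1_000_000)
true_mse = np.mean((a * z + b - x) ** 2)
print(true_mse, estimated_affine_mse(a, b, z, sigma2).mean())
```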
abstract— we explore the problem of classification within a medical image data-set based on a feature vector extracted from the deepest layer of pre-trained convolution neural networks. we have used feature vectors from several pre-trained structures, including networks with/without transfer learning, to evaluate the performance of pre-trained deep features versus cnns which have been trained by that specific dataset, as well as the impact of transfer learning with a small number of samples. all experiments are done on the kimia path24 dataset, which consists of 27,055 histopathology training patches in 24 tissue texture classes along with 1,325 test patches for evaluation. the results show that pre-trained networks are quite competitive against training from scratch. as well, fine-tuning does not seem to add enough tangible improvement for vgg16 to justify additional training, while we observed considerable improvement in retrieval and classification accuracy when we fine-tuned the inception structure. keywords— image retrieval, medical imaging, deep learning, cnns, digital pathology, image classification, deep features, vgg, inception.
1
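a minimal sketch of the deep-features workflow: freeze a pretrained backbone, use its penultimate activations as feature vectors, and train a light classifier on top. it assumes a recent torchvision (for the weights enum) and substitutes resnet18 for the vgg16/inception networks used in the paper; the dataset variables are placeholders.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.linear_model import LogisticRegression

# frozen pretrained backbone; its penultimate activations serve as deep features
backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
backbone.fc = torch.nn.Identity()          # drop the imagenet classification head
backbone.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def deep_features(images):                 # images: a list of PIL images
    batch = torch.stack([preprocess(im) for im in images])
    return backbone(batch).numpy()         # (n, 512) feature vectors

# hypothetical placeholders for a patch dataset such as kimia path24:
# clf = LogisticRegression(max_iter=1000).fit(deep_features(train_imgs), train_labels)
# predictions = clf.predict(deep_features(test_imgs))
```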
abstract. in algebra such as algebraic geometry, modular representation theory and commutative ring theory, we study algebraic objects through associated triangulated categories and topological spaces. in this paper, we consider the relationship between such triangulated categories and topological spaces. to be precise, we explore necessary conditions for derived equivalence of noetherian schemes, stable equivalence of finite groups, and singular equivalence of commutative noetherian rings by using associated topological spaces.
0
abstract the present paper considers a hybrid local search approach to the eternity ii puzzle and to unsigned, rectangular, edge matching puzzles in general. both an original mixed-integer linear programming (milp) formulation and a novel max-clique formulation are presented for this np-hard problem. although the presented formulations remain computationally intractable for medium and large sized instances, they can serve as the basis for developing heuristic decompositions and very large scale neighbourhoods. as a side product of the max-clique formulation, new hard-to-solve instances are published for the academic research community. two reasonably well performing milp-based constructive methods are presented and used for determining the initial solution of a multi-neighbourhood local search approach. experimental results confirm that this local search can further improve the results obtained by the constructive heuristics and is quite competitive with the state of the art procedures. keywords: edge matching puzzle, hybrid approach, local search
8
abstract structured prediction is concerned with predicting multiple inter-dependent labels simultaneously. classical methods like crf achieve this by maximizing a score function over the set of possible label assignments. recent extensions use neural networks either to implement the score function or to perform the maximization. the current paper takes an alternative approach, using a neural network to generate the structured output directly, without going through a score function. we take an axiomatic perspective to derive the desired properties and invariances of such a network to certain input permutations, presenting a structural characterization that is provably both necessary and sufficient. we then discuss graph-permutation invariant (gpi) architectures that satisfy this characterization and explain how they can be used for deep structured prediction. we evaluate our approach on the challenging problem of inferring a scene graph from an image, namely, predicting entities and their relations in the image. we obtain state-of-the-art results on the challenging visual genome benchmark, outperforming all recent approaches.
1
abstract this paper provides conditions under which a non-stationary copula-based markov process is β-mixing. we introduce, as a particular case, a convolution-based gaussian markov process which generalizes the standard random walk allowing the increments to be dependent.
10
abstract as intelligent systems gain autonomy and capability, it becomes vital to ensure that their objectives match those of their human users; this is known as the value-alignment problem. in robotics, value alignment is key to the design of collaborative robots that can integrate into human workflows, successfully inferring and adapting to their users’ objectives as they go. we argue that a meaningful solution to value alignment must combine multi-agent decision theory with rich mathematical models of human cognition, enabling robots to tap into people’s natural collaborative capabilities. we present a solution to the cooperative inverse reinforcement learning (cirl) dynamic game based on well-established cognitive models of decision making and theory of mind. the solution captures a key reciprocity relation: the human will not plan her actions in isolation, but rather reason pedagogically about how the robot might learn from them; the robot, in turn, can anticipate this and interpret the human’s actions pragmatically. to our knowledge, this work constitutes the first formal analysis of value alignment grounded in empirically validated cognitive models. key words: value alignment, human-robot interaction, dynamic game theory
2
abstract—generative network models play an important role in algorithm development, scaling studies, network analysis, and realistic system benchmarks for graph data sets. the commonly used graph-based benchmark model r-mat has some drawbacks concerning realism and the scaling behavior of network properties. a complex network model gaining considerable popularity is the random hyperbolic graph, generated by distributing points within a disk in the hyperbolic plane and then adding edges between points whose hyperbolic distance is below a threshold. we present in this paper a fast generation algorithm for such graphs. our experiments show that our new generator achieves speedup factors of 3-60 over the best previous implementation. one billion edges can now be generated in under one minute on a shared-memory workstation. furthermore, we present a dynamic extension to model gradual network change, while preserving at each step the point position probabilities.
8
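for reference, the model being generated can be stated in a few lines; the naive o(n^2) sketch below samples points with the standard radial density and connects pairs at hyperbolic distance below r. the paper's contribution is a much faster generator, which this sketch does not attempt.

```python
import numpy as np

def random_hyperbolic_graph(n, R, alpha=1.0, seed=0):
    """naive O(n^2) generator: place points in a hyperbolic disk of radius R
    and connect pairs whose hyperbolic distance is below R."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, n)
    # radial cdf F(r) = (cosh(alpha*r) - 1) / (cosh(alpha*R) - 1), inverted:
    u = rng.uniform(size=n)
    r = np.arccosh(1.0 + u * (np.cosh(alpha * R) - 1.0)) / alpha
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            dtheta = np.pi - abs(np.pi - abs(theta[i] - theta[j]))  # wrapped angle
            cosh_d = (np.cosh(r[i]) * np.cosh(r[j])
                      - np.sinh(r[i]) * np.sinh(r[j]) * np.cos(dtheta))
            if np.arccosh(max(cosh_d, 1.0)) < R:
                edges.append((i, j))
    return r, theta, edges

_, _, E = random_hyperbolic_graph(n=300, R=6.0)
print(len(E), "edges")
```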
abstract. in 1982, drezner proposed the (1|1)-centroid problem on the plane, in which two players, called the leader and the follower, open facilities to provide service to customers in a competitive manner. the leader opens the first facility, and the follower opens the second. each customer will patronize the facility closest to him (ties broken in favor of the first one), thereby deciding the market share of the two facilities. the goal is to find the best position for the leader’s facility so that its market share is maximized. the best algorithm for this problem is an o(n^2 log n)-time parametric search approach, which searches over the space of market share values. in the same paper, drezner also proposed a general version of the (1|1)-centroid problem by introducing a minimal distance constraint r, such that the follower’s facility is not allowed to be located within a distance r from the leader’s. he proposed an o(n^5 log n)-time algorithm for this general version by identifying o(n^4) points as the candidates of the optimal solution and checking the market share for each of them. in this paper, we develop a new parametric search approach searching over the o(n^4) candidate points, and present an o(n^2 log n)-time algorithm for the general version, thereby closing the o(n^3) gap between the two bounds. keywords: competitive facility, euclidean plane, parametric search
8
abstract a new design methodology is introduced, with some examples of building a hierarchy of domain-specific languages on top of scheme.
2
abstract. in this paper we introduce associative commutative distributive term rewriting (acdtr), a rewriting language for rewriting logical formulae. acdtr extends ac term rewriting by adding distribution of conjunction over other operators. conjunction is vital for expressive term rewriting systems since it allows us to require that multiple conditions hold for a term rewriting rule to be used. acdtr uses the notion of a “conjunctive context”, which is the conjunction of constraints that must hold in the context of a term, to enable the programmer to write very expressive and targeted rewriting rules. acdtr can be seen as a general logic programming language that extends constraint handling rules and ac term rewriting. in this paper we define the semantics of acdtr and describe our prototype implementation.
6
abstract this paper addresses an open problem in traffic modeling: the second-order macroscopic node problem. a second-order macroscopic traffic model, in contrast to a first-order model, allows for variation of driving behavior across subpopulations of vehicles in the flow. the second-order models are thus more descriptive (e.g., they have been used to model variable mixtures of behaviorally-different traffic, like car/truck traffic, autonomous/human-driven traffic, etc.), but are much more complex. the second-order node problem is a particularly complex problem, as it requires the resolution of discontinuities in traffic density and mixture characteristics, and solving of throughflows for arbitrary numbers of input and output roads to a node (in other words, this is an arbitrary-dimensional riemann problem with two conserved quantities). we propose a solution to this problem by making use of a recently-introduced dynamic system characterization of the first-order node model problem, which gives insight and intuition as to the continuous-time dynamics implicit in first-order node models. we use this intuition to extend the dynamic system node model to the second-order setting. we also extend the well-known “generic class of node model” constraints to the second order and present a simple solution algorithm to the second-order node problem. this node model has immediate applications in allowing modeling of behaviorally-complex traffic flows of contemporary interest (like partially-autonomous-vehicle flows) in arbitrary road networks.
3
abstract—in this paper, we tackle the direct and inverse problems for the remote-field eddy-current (rfec) technology. the direct problem is the sensor model, where given the geometry the measurements are obtained. conversely, the inverse problem is where the geometry needs to be estimated given the field measurements. these problems are particularly important in the field of non-destructive testing (ndt) because they allow assessing the quality of the structure monitored. we solve the direct problem in a parametric fashion using the least absolute shrinkage and selection operator (lasso). the proposed inverse model uses the parameters from the direct model to recover the thickness using least squares, producing the optimal solution given the direct model. this study is restricted to the 2d axisymmetric scenario. both the direct and inverse models are validated using a finite element analysis (fea) environment with realistic pipe profiles.
3
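a toy sketch of the two-step scheme described above: fit a sparse linear (lasso) direct model from geometry to measurements, then invert a new measurement by least squares over the learned coefficients. the data, dimensions, and names below are hypothetical stand-ins, not the fea-validated setup of the paper.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
# toy stand-in: geometry -> measurement is sparse linear plus noise
n_samples, n_geometry, n_measure = 200, 10, 25
W_true = rng.normal(size=(n_geometry, n_measure)) * (rng.random((n_geometry, n_measure)) < 0.3)
G = rng.normal(size=(n_samples, n_geometry))      # geometry (e.g. thickness profile)
M = G @ W_true + 0.01 * rng.normal(size=(n_samples, n_measure))

# direct model: one sparse regressor per measurement channel
direct = Lasso(alpha=0.01).fit(G, M)              # direct.coef_: (n_measure, n_geometry)

# inverse model: recover geometry from a new measurement by least squares
g_new = rng.normal(size=n_geometry)
m_new = g_new @ W_true
g_hat, *_ = np.linalg.lstsq(direct.coef_, m_new, rcond=None)
print(np.round(g_hat - g_new, 2))                 # approximately zero
```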
abstract. this is a continuation of a previous paper by the same authors. in the former paper, it was proved that in order to obtain local uniformization for valuations centered on local domains, it is enough to prove it for rank one valuations. in this paper, we extend this result to the case of valuations centered on rings which are not necessarily integral domains and may even contain nilpotents.
0
abstract dropout is a popular technique for regularizing artificial neural networks. dropout networks are generally trained by minibatch gradient descent with a dropout mask turning off some of the units—a different pattern of dropout is applied to every sample in the minibatch. we explore a very simple alternative to the dropout mask. instead of masking dropped out units by setting them to zero, we perform matrix multiplication using a submatrix of the weight matrix—unneeded hidden units are never calculated. performing dropout batchwise, so that one pattern of dropout is used for each sample in a minibatch, we can substantially reduce training times. batchwise dropout can be used with fully-connected and convolutional neural networks.
9
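a minimal numpy sketch of the key trick just described: because one dropout pattern is shared by the whole minibatch, the forward pass can multiply by a column-submatrix of the weight matrix, so dropped units are never computed at all. the inverted-dropout rescaling is one common convention, assumed here.

```python
import numpy as np

def batchwise_dropout_forward(X, W, p=0.5, rng=None):
    """one hidden layer with batchwise dropout: a single pattern of dropped
    units is shared by the whole minibatch, so we slice the weight matrix
    and never compute the dropped units."""
    rng = rng or np.random.default_rng()
    kept = np.flatnonzero(rng.random(W.shape[1]) >= p)   # surviving hidden units
    H = np.maximum(X @ W[:, kept], 0.0)                  # relu on a submatrix product
    H /= (1.0 - p)                                       # inverted-dropout scaling
    return H, kept

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 100))        # minibatch of 32 samples
W = rng.normal(size=(100, 200))       # 100 -> 200 fully-connected layer
H, kept = batchwise_dropout_forward(X, W, p=0.5, rng=rng)
print(H.shape, len(kept))             # only about half of the 200 units computed
```

the backward pass mirrors this: gradients touch only W[:, kept], which is where the training-time savings come from.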
abstract we consider the problem faced by a service platform that needs to match supply with demand, but also to learn attributes of new arrivals in order to match them better in the future. we introduce a benchmark model with heterogeneous workers and jobs that arrive over time. job types are known to the platform, but worker types are unknown and must be learned by observing match outcomes. workers depart after performing a certain number of jobs. the payoff from a match depends on the pair of types and the goal is to maximize the steady-state rate of accumulation of payoff. our main contribution is a complete characterization of the structure of the optimal policy in the limit that each worker performs many jobs. the platform faces a trade-off for each worker between myopically maximizing payoffs (exploitation) and learning the type of the worker (exploration). this creates a multitude of multi-armed bandit problems, one for each worker, coupled together by the constraint on availability of jobs of different types (capacity constraints). we find that the platform should estimate a shadow price for each job type, and use the payoffs adjusted by these prices, first, to determine its learning goals and then, for each worker, (i) to balance learning with payoffs during the “exploration phase”, and (ii) to myopically match after it has achieved its learning goals during the “exploitation phase”. keywords: matching, learning, two-sided platform, multi-armed bandit, capacity constraints.
8
abstract. a novel matching based heuristic algorithm designed to detect specially formulated infeasible {0, 1} ips is presented. the algorithm’s input is a set of nested doubly stochastic subsystems and a set e of instance-defining variables set at zero level. the algorithm deduces additional variables at zero level until either a constraint is violated (the ip is infeasible), or no more variables can be deduced zero (the ip is undecided). all feasible ips, and all infeasible ips not detected as infeasible, are undecided. we successfully apply the algorithm to a small set of specially formulated infeasible {0, 1} ip instances of the hamilton cycle decision problem. we show how to model both the graph and subgraph isomorphism decision problems for input to the algorithm. increased levels of nested doubly stochastic subsystems can be implemented dynamically. the algorithm is designed for parallel processing, and for inclusion of techniques in addition to matching.
8
abstract we propose thalnet, a deep learning model inspired by neocortical communication via the thalamus. our model consists of recurrent neural modules that send features through a routing center, endowing the modules with the flexibility to share features over multiple time steps. we show that our model learns to route information hierarchically, processing input data by a chain of modules. we observe common architectures, such as feed forward neural networks and skip connections, emerging as special cases of our architecture, while novel connectivity patterns are learned for the text8 compression task. our model outperforms standard recurrent neural networks on several sequential benchmarks.
2
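a loose sketch of the routing-center idea: each recurrent module writes a feature vector into a shared center and reads a learned projection of the full center at the next step. module count, dimensions, and the choice to feed the raw input to every module are illustrative assumptions, not the paper's exact wiring.

```python
import torch
import torch.nn as nn

class ThalNetSketch(nn.Module):
    """recurrent modules exchanging features through a shared routing center."""
    def __init__(self, n_modules=4, in_dim=32, feat=16, hidden=64):
        super().__init__()
        center = n_modules * feat
        self.cells = nn.ModuleList(
            [nn.GRUCell(in_dim + center, hidden) for _ in range(n_modules)])
        self.reads = nn.ModuleList(
            [nn.Linear(center, center) for _ in range(n_modules)])
        self.writes = nn.ModuleList(
            [nn.Linear(hidden, feat) for _ in range(n_modules)])
        self.feat, self.n_modules, self.hidden = feat, n_modules, hidden

    def forward(self, x_seq):                    # x_seq: (T, batch, in_dim)
        B = x_seq.size(1)
        h = [x_seq.new_zeros(B, self.hidden) for _ in range(self.n_modules)]
        center = x_seq.new_zeros(B, self.n_modules * self.feat)
        for x in x_seq:
            outs = []
            for m in range(self.n_modules):
                inp = torch.cat([x, self.reads[m](center)], dim=1)
                h[m] = self.cells[m](inp, h[m])      # module update
                outs.append(self.writes[m](h[m]))    # module's write to the center
            center = torch.cat(outs, dim=1)          # modules share features here
        return center

print(ThalNetSketch()(torch.randn(5, 2, 32)).shape)  # torch.Size([2, 64])
```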
abstractions for concurrent consensus
6