citing_id: stringlengths 9-16
cited_id: stringlengths 9-16
section_title: stringlengths 0-2.25k
citation: stringlengths 52-442
text_before_citation: sequence
text_after_citation: sequence
keywords: sequence
citation_intent: stringclasses (3 values)
citing_paper_content: dict
cited_paper_content: dict
1704.03295
1502.02445
I. INTRODUCTION
Unlike previous work that employs CNNs for a brain image segmentation task #REFR , the proposed method allows omitting the explicit definition of spatial features.
[ "Recent MICCAI challenges in neonatal #OTHEREFR and adult #OTHEREFR MR brain image segmentation show that various segmentation methods achieve accurate results, but also that different methods are better at different aspects of brain image segmentation.", "The best results per tissue type in both the NeoBrainS12 2 and the MRBrainS13 3 challenges were not obtained by a single best performing method.", "These challenges also show that, despite the overall accurate segmentations achieved by these automatic methods, various inaccuracies are still present.", "This paper presents a method for the automatic segmentation of anatomical MR brain images into a number of classes based on a multi-scale CNN.", "The multi-scale approach allows the method to obtain accurate segmentation details as well as spatial consistency." ]
[ "Furthermore, unlike previous work used for brain image segmentation #OTHEREFR , the method uses multiple patch and kernel sizes combined.", "This approach allows the method to learn multi-scale features that estimate both intensity and spatial characteristics.", "In contrast to using these multiple patch and kernel sizes, other approaches to multi-scale CNNs, used in different applications, provide multi-scale features by directly using the feature maps after the first convolution layer as additional input for a fully connected layer #OTHEREFR , #OTHEREFR .", "Additionally, unlike previous work on brain image segmentation, the method is applied to the segmentation of images of developing neonates at different ages as well as young adults and ageing adults, and to coronal as well as axial images.", "This allows demonstrating that the method is able to adapt to the segmentation task at hand based on training data." ]
[ "brain image segmentation" ]
method
{ "title": "Automatic segmentation of MR brain images with a convolutional neural network", "abstract": "Abstract-Automatic segmentation in MR brain images is important for quantitative analysis in large-scale studies with images acquired at all ages. This paper presents a method for the automatic segmentation of MR brain images into a number of tissue classes using a convolutional neural network. To ensure that the method obtains accurate segmentation details as well as spatial consistency, the network uses multiple patch sizes and multiple convolution kernel sizes to acquire multi-scale information about each voxel. The method is not dependent on explicit features, but learns to recognise the information that is important for the classification based on training data. The method requires a single anatomical MR image only. The segmentation method is applied to five different data sets: coronal T 2 -weighted images of preterm infants acquired at 30 weeks postmenstrual age (PMA) and 40 weeks PMA, axial T 2 -weighted images of preterm infants acquired at 40 weeks PMA, axial T 1 -weighted images of ageing adults acquired at an average age of 70 years, and T 1 -weighted images of young adults acquired at an average age of 23 years. The method obtained the following average Dice coefficients over all segmented tissue classes for each data set, respectively: 0.87, 0.82, 0.84, 0.86 and 0.91. The results demonstrate that the method obtains accurate segmentations in all five sets, and hence demonstrates its robustness to differences in age and acquisition protocol. Index Terms-Deep learning, convolutional neural networks, automatic image segmentation, preterm neonatal brain, adult brain, MRI." }
{ "title": "Deep neural networks for anatomical brain segmentation", "abstract": "We present a novel approach to automatically segment magnetic resonance (MR)" }
1903.12152
1502.02445
As a pioneer, de Brébisson #REFR proposed a unified CNN network to learn 2D and 3D patches as well as their spatial coordinates for whole brain segmentation.
[ "Recently, CNN methods have been widely developed to applied to whole brain segmentation.", "The straightforward strategy of performing whole brain segmentation is to fit all brain volume to a 3D CNN based segmentation network, like U-Net [42] or V-Net #OTHEREFR .", "Unfortunately, it is impractical to fit the clinical used high-resolution MRI (e.g., 1mm or even higher isotropic voxel size) to state-of-the-art 3D fully convolutional networks (FCN) due to the memory limitation of prevalent GPU.", "Another challenge of using CNN methods is that the manually traced whole brain MRI scans with detailed annotations (e.g., >100 labels) are rare commodities for any individual lab.", "To address the challenges of GPU memory restriction and limited training data, many previous efforts have been made." ]
[ "Then, such network has been extended to BrainSegNet #OTHEREFR , which employed 2.5D patches for training a CNN network.", "Recently, DeepNAT #OTHEREFR was proposed to perform hierarchical multi-task learning on 3D patches.", "These methods modeled the whole brain segmentation as a per-voxel segmentation problem.", "More recently, from another \"image-to-image\" perspective, the powerful fully convolution networks (FCN) have introduced to the whole brain segmentation.", "Roy et al., #OTHEREFR developed a 2D based method to train an FCN network using large-scale auxiliary labels on initially unlabeled data." ]
[ "whole brain segmentation" ]
background
{ "title": "3D Whole Brain Segmentation using Spatially Localized Atlas Network Tiles", "abstract": "Abstract-Detailed whole brain segmentation is an essential quantitative technique in medical image analysis, which provides a non-invasive way of measuring brain regions from a clinical acquired structural magnetic resonance imaging (MRI). Recently, deep convolution neural network (CNN) has been applied to whole brain segmentation. However, restricted by current GPU memory, 2D based methods, downsampling based 3D CNN methods, and patch-based high-resolution 3D CNN methods have been the de facto standard solutions. 3D patch-based high resolution methods typically yield superior performance among CNN approaches on detailed whole brain segmentation (>100 labels), however, whose performance are still commonly inferior compared with state-ofthe-art multi-atlas segmentation methods (MAS) due to the following challenges: (1) a single network is typically used to learn both spatial and contextual information for the patches, (2) limited manually traced whole brain volumes are available (typically less than 50) for training a network. In this work, we propose the spatially localized atlas network tiles (SLANT) method to distribute multiple independent 3D fully convolutional networks (FCN) for high-resolution whole brain segmentation. To address the first challenge, multiple spatially distributed networks were used in the SLANT method, in which each network learned contextual information for a fixed spatial location. To address the second challenge, auxiliary labels on 5111 initially unlabeled scans were created by multi-atlas segmentation for training. Since the method integrated multiple traditional medical image processing methods with deep learning, we developed a containerized pipeline to deploy the end-to-end solution. From the results, the proposed method achieved superior performance compared with multi-atlas segmentation methods, while reducing the computational time from >30 hours to 15 minutes. The method has been made available in (https://github.com/MASILab/SLANTbrainSeg)." }
{ "title": "Deep neural networks for anatomical brain segmentation", "abstract": "We present a novel approach to automatically segment magnetic resonance (MR)" }
1804.04563
1502.02445
RESULTS
In comparison, #REFR , which to our knowledge is the only work to have used a patch-based segmentation approach for the original 135-class problem, proposed a model composed of 30M parameters and reached an average Dice of 0.725.
[ "Figure 2 shows an example of segmentation maps we produced with the tested models.", "A real performance gap can be noticed between BaseNet(e) and BaseNet+DistBranch (d), where the first detects background between the left and right lateral ventricles and the second is able to recover smooth structures.", "In table 1, we can notice the impact of each branch in this incremental setup. Adding each of them successively brings better results.", "The 2D multi-resolution model (BaseNet) combined with the distance integration (DistBranch) shows a noticeable decrease in the average and standard deviation of the Hausdorff distance, thus reducing serious segmentation issues, with the help of better spatial constraints.", "The best model is finally a combination of all the proposed branches, leading to an average dice of 0.748." ]
[ "Our model has 10 order of magnitude less parameters, with a better average dice.", "With this model, we would have been ranked 5th of the multi-atlas segmentation challenge at MICCAI 2012, with a segmentation time per image of approximately 9 minutes.", "We briefly compare to a UNet #OTHEREFR like encoder-decoder architecture inspired from #OTHEREFR , with skip-connections and max unpooling.", "It was trained to segment slice by slice, optimized only with cross-entropy and dice loss, on the same dataset.", "It showed encouraging dice similarity, but poor Hausdorff performance, demonstrating that patch based segmentation is still a competitive task for brain segmentation." ]
[ "patch", "model" ]
method
{ "title": "Towards integrating spatial localization in convolutional neural networks for brain image segmentation", "abstract": "Semantic segmentation is an established while rapidly evolving field in medical imaging. In this paper we focus on the segmentation of brain Magnetic Resonance Images (MRI) into cerebral structures using convolutional neural networks (CNN). CNNs achieve good performance by finding effective high dimensional image features describing the patch content only. In this work, we propose different ways to introduce spatial constraints into the network to further reduce prediction inconsistencies. A patch based CNN architecture was trained, making use of multiple scales to gather contextual information. Spatial constraints were introduced within the CNN through a distance to landmarks feature or through the integration of a probability atlas. We demonstrate experimentally that using spatial information helps to reduce segmentation inconsistencies." }
{ "title": "Deep neural networks for anatomical brain segmentation", "abstract": "We present a novel approach to automatically segment magnetic resonance (MR)" }
1601.05875
1010.3613
I. INTRODUCTION
There is no corresponding upper bound to #REFR for continuous random variables, however, and it is unclear under what conditions G is finite.
[ "Wyner's common information between scalar jointly Gaussian random variables is computed in #OTHEREFR , and the result is extended to Gaussian vectors in #OTHEREFR , and to outputs of additive Gaussian channels in #OTHEREFR .", "We can also generalize the bounds in (2) to n random variables to obtain", "where I D is the dual total correlation #OTHEREFR -a generalization of mutual information defined as", "Details of the derivation of the lower bound in (5) can be found in Appendix A.", "Note that the lower bound on J continues to hold for continuous random variables after replacing the entropy H in the definition of I D with the differential entropy h." ]
[ "In this paper we devise a computationally efficient scheme for constructing a common randomness variable W for distributed simulation of n continuous random variables and establish upper bounds on its entropy, which in turn provide upper bounds on G.", "In particular we establish the following upper bound on G when the pdf of X n is log-concave", "For n = 2, this bound reduces to", "Applying this result to two jointly Gaussian random variables shows that only a finite amount of common randomness is needed for their distributed simulation.", "The above upper bound also provides an upper bound on Wyner's common information between n continuous random variables with log-concave pdf." ]
[ "continuous random variables" ]
background
{ "title": "Distributed Simulation of Continuous Random Variables", "abstract": "We establish the first known upper bound on the exact and Wyner's common information of n continuous random variables in terms of the dual total correlation between them (which is a generalization of mutual information). In particular, we show that when the pdf of the random variables is log-concave, there is a constant gap of n 2 log e + 9n log n between this upper bound and the dual total correlation lower bound that does not depend on the distribution. The upper bound is obtained using a computationally efficient dyadic decomposition scheme for constructing a discrete common randomness variable W from which the n random variables can be simulated in a distributed manner. We then bound the entropy of W using a new measure, which we refer to as the erosion entropy." }
{ "title": "The common information of N dependent random variables", "abstract": "Abstract-This paper generalizes Wyner's definition of common information of a pair of random variables to that of N random variables. We prove coding theorems that show the same operational meanings for the common information of two random variables generalize to that of N random variables. As a byproduct of our proof, we show that the Gray-Wyner source coding network can be generalized to N source squences with N decoders. We also establish a monotone property of Wyner's common information which is in contrast to other notions of the common information, specifically Shannon's mutual information and Gács and Körner's common randomness. Examples about the computation of Wyner's common information of N random variables are also given." }
1106.2050
1010.3613
IV. COMPARISON AND EXAMPLES In [1] Wyner defines the common information of two correlated random variables
Recently, this notion of common information was generalized to K correlated random variables in #REFR . The common information, B(X_1, . . .
[ "One interpretation of this common information can be obtained from the Gray-Wyner source network.", "The common information B(X_1, X_2) of two random variables is given as the smallest value of R_0 such that (R_1, R_2, R_0) ∈ R_{G−W} and R_0 + R_1 + R_2 ≤ H(X_1, X_2)." ]
[ ", X_K), of K correlated random variables, as defined in #OTHEREFR , is given by the smallest value of R_0 such that", "where the infimum is over all distributions p(w, x_1, . . . , x_K", "It was shown in #OTHEREFR that B(X_1, . . . , X_K) is monotonically increasing in K.", "We believe that any intuitively satisfactory measure of common information should satisfy the property that the common information should decrease as the number of random variables increases.", "In Proposition 1, we showed that our measure of common information indeed satisfies this property." ]
[ "common information" ]
background
{ "title": "Multi-user privacy: The Gray-Wyner system and generalized common information", "abstract": "Abstract-The problem of preserving privacy when a multivariate source is required to be revealed partially to multiple users is modeled as a Gray-Wyner source coding problem with K correlated sources at the encoder and K decoders in which the k th decoder, k = 1, 2, ..., K, losslessly reconstructs the k th source via a common link of rate R0 and a private link of rate R k . The privacy requirement of keeping each decoder oblivious of all sources other than the one intended for it is introduced via an equivocation constraint E k at decoder k such that the total equivocation summed over all decoders E ≥ ∆. The set of achievable ({R k } K k=1 , R0, ∆) rates-equivocation (K + 2)-tuples is completely characterized. Using this characterization, two different definitions of common information are presented and are shown to be equivalent." }
{ "title": "The common information of N dependent random variables", "abstract": "Abstract-This paper generalizes Wyner's definition of common information of a pair of random variables to that of N random variables. We prove coding theorems that show the same operational meanings for the common information of two random variables generalize to that of N random variables. As a byproduct of our proof, we show that the Gray-Wyner source coding network can be generalized to N source squences with N decoders. We also establish a monotone property of Wyner's common information which is in contrast to other notions of the common information, specifically Shannon's mutual information and Gács and Körner's common randomness. Examples about the computation of Wyner's common information of N random variables are also given." }
1911.02404
1609.03773
Results on mouse dataset
Unlike the human dataset, the mouse dataset is more challenging due to its stochastic nature, which makes it difficult to categorize its motion #REFR . Table 2 depicts the comparison results in terms of MAE.
[]
[ "Our model outperformes other models on six out of eight frames.", "We also found that zero-velocity only surpasses others at the 80ms frame and falls behind with a notable margin on the remaining frames.", "This is because the movement of mouse is faster and more random than the human. As suggested in Fig 6," ]
[ "motion", "human dataset" ]
result
{ "title": "Predicting Long-Term Skeletal Motions by a Spatio-Temporal Hierarchical Recurrent Network", "abstract": "The primary goal of skeletal motion prediction is to generate future motion by observing a sequence of 3D skeletons. A key challenge in motion prediction is the fact that a motion can often be performed in several different ways, with each consisting of its own configuration of poses and their spatio-temporal dependencies, and as a result, the predicted poses often converge to the motionless poses or non-human like motions in long-term prediction. This leads us to define a hierarchical recurrent network model that explicitly characterizes these internal configurations of poses and their local and global spatio-temporal dependencies. The model introduces a latent vector variable from the Lie algebra to represent spatial and temporal relations simultaneously. Furthermore, a structured stack LSTMbased decoder is devised to decode the predicted poses with a new loss function defined to estimate the quantized weight of each body part in a pose. Empirical evaluations on benchmark datasets suggest our approach significantly outperforms the state-of-the-art methods on both short-term and long-term motion prediction." }
{ "title": "Lie-X: Depth Image Based Articulated Object Pose Estimation, Tracking, and Action Recognition on Lie Groups", "abstract": "Pose estimation, tracking, and action recognition of articulated objects from depth images are important and challenging problems, which are normally considered separately. In this paper, a unified paradigm based on Lie group theory is proposed, which enables us to collectively address these related problems. Our approach is also applicable to a wide range of articulated objects. Empirically it is evaluated on lab animals including mouse and fish, as well as on human hand. On these applications, it is shown to deliver competitive results compared to the state-of-the-arts, and non-trivial baselines including convolutional neural networks and regression forest methods. Moreover, new sets of annotated depth data of articulated objects are created which, together with our code, are made publicly available." }
1912.13436
1906.09792
I. INTRODUCTION
This variant of SABM, named SABM-SR, was shown to outperform iBDD and SABM by up to 0.8 dB and 0.3 dB, respectively, with only minor additional complexity #REFR .
[ "Therefore, solutions which trade-off performance for a lower decoding complexity are becoming increasingly attractive #OTHEREFR .", "Along the path traced by Chase in 1972 #OTHEREFR , hybrid hard-decision (HD)/SD decoders have been recently reproposed in optical communications as a low-complexity alternative to fully-fledged SD-FEC G. Liga schemes #OTHEREFR - #OTHEREFR .", "In these schemes, reliability metrics are used to assist a standard HD decoder to improve its performance, whilst keeping the complexity of the overall decoder of the same order as that of algebraic HD decoding.", "These new decoding algorithms have been applied to both product codes (PCs) and staircase codes, showing substantial coding gains (0.2-0.8 dB) compared to its traditional HD counterpart, referred to as iterative bounded distance decoder (iBDD) [8, Sec. II-A].", "One such decoding algorithm is the soft-aided bit marking (SABM) algorithm which was introduced in #OTHEREFR and later extended in #OTHEREFR to incorporate so-called scaled reliabilities (SRs), defined in #OTHEREFR , in the decoding process." ]
[ "In combination with FEC, constellation shaping has been demonstrated to be a viable solution for providing additional signal-to-noise ratio (SNR) gains at a given spectral efficiency (SE).", "In particular, geometrical shaping can be easily coupled with FEC and only requires straightforward modifications of the mapper and demapper.", "Recently, the four-dimensional 64-ary polarization-ring-switching (4D-64PRS) format, introduced in #OTHEREFR , was demonstrated to outperform other notable 4D modulation formats (see e.g., #OTHEREFR ) at a nominal SE of 6 bit/4D-sym #OTHEREFR , #OTHEREFR , thus representing a viable solution for long-reach 400G (dual-carrier) transponders.", "In this work, we combine the low-complexity SABM-SR decoder and a PC-coded nonlinearity-tailored 4D-64PRS modulation format, enabling transmission of 11×218 Gbit/s over transatlantic distances (≥ 5,000 km) at 5.2 bit/symbol.", "Moreover, we demonstrate a total 30% reach increase over polarization multiplexed 8-quadrature amplitude modulation (PM-8QAM) and iBDD decoding." ]
[ "iBDD" ]
method
{ "title": "30% Reach Increase via Low-complexity Hybrid HD/SD FEC and Nonlinearity-tolerant 4D Modulation", "abstract": "Current optical coherent transponders technology is driving data rates towards 1 Tb/s/λ and beyond. This trend requires both high-performance coded modulation schemes and efficient implementation of the forward-error-correction (FEC) decoder. A possible solution to this problem is combining advanced multidimensional modulation formats with low-complexity hybrid HD/SD FEC decoders. Following this rationale, in this paper we combine two recently introduced coded modulation techniques: the geometrically-shaped 4D-64 polarization ring-switched and the soft-aided bit-marking-scaled reliability decoder. This joint scheme enabled us to experimentally demonstrate the transmission of 11×218 Gbit/s channels over transatlantic distances at 5.2 bit/4D-sym. Furthermore, a 30% reach increase is demonstrated over PM-8QAM and conventional HD-FEC decoding for product codes." }
{ "title": "A novel soft-aided bit-marking decoder for product codes", "abstract": "We introduce a novel soft-aided hard-decision decoder for product codes adopting bit marking via updated reliabilities at each decoding iteration. Gains up to 0.8 dB vs. standard iterative bounded distance decoding and up to 0.3 dB vs. our previously proposed bit-marking decoder are demonstrated." }
1901.06796
1707.05970
FGSM-based
The authors in #REFR developed a new surrogate loss function based on FGSM to find adversarial examples in deep malware detection models.
[ "The work #OTHEREFR represents an executable by binary vector {x 1 , ..., x m }, x i ∈ {0, 1} and m is the number of features, that using 1 and 0 to indicate the feature is present or not.", "The authors investigated four method to generate binaryencoded adversarial examples.", "The first two methods adopt FSGM method, but restricted in a binary domain by introducing deterministic rounding (dFGSM) and randomized rounding (rFGSM).", "The third method multi-step Bit Gradient Ascent (BGA K ) sets the bit of the j-th feature if the corresponding partial deivative of the loss is greater than or equal to the loss gradient's l 2 -norm divided by √ m.", "The fourth method multi-step Bit Coordinate Ascent (BCA k ) updates one bit in each step by considering the feature with the maximum corresponding partial derivative of the loss." ]
[ "They injected a sequence of bytes (payload) to the binary files to preserve the original functionality of the malware. Finally they reconstructed adverse embedding to valid binary file." ]
[ "adversarial examples", "deep malware detection" ]
method
{ "title": "Generating Textual Adversarial Examples for Deep Learning Models: A Survey", "abstract": "With the development of high computational devices, deep neural networks (DNNs), in recent years, have gained significant popularity in many Artificial Intelligence (AI) applications. However, previous efforts have shown that DNNs were vulnerable to strategically modified samples, named adversarial examples. These samples are generated with some imperceptible perturbations, but can fool the DNNs to give false predictions. Inspired by the popularity of generating adversarial examples for image DNNs, research efforts on attacking DNNs for textual applications emerges in recent years. However, existing perturbation methods for images cannot be directly applied to texts as text data is discrete. In this article, we review research works that address this difference and generate textual adversarial examples on DNNs. We collect, select, summarize, discuss and analyze these works in a comprehensive way and cover all the related information to make the article self-contained. Finally, drawing on the reviewed literature, we provide further discussions and suggestions on this topic." }
{ "title": "Generic Black-Box End-to-End Attack against RNNs and Other API Calls Based Malware Classifiers", "abstract": "Deep neural networks (DNNs) are used to solve complex classification problems, for which other machine learning classifiers, such as SVM, fall short. Recurrent neural networks (RNNs) have been used for tasks that involves sequential inputs, such as speech to text. In the cyber security domain, RNNs based on API calls have been used effectively to classify previously un-encountered malware. In this paper, we present a blackbox attack against RNNs, focusing on finding adversarial API call sequences that would be misclassified by a RNN without affecting the malware functionality. We also show that this attack is effective against many classifiers, due-to the transferability principle between RNN variants, feed-forward DNNs and traditional machine learning classifiers such as SVM. Finally, we implemented GADGET, a software framework to convert any malware binary to a binary undetected by an API calls based malware classifier, using the proposed attack, without access to the malware source code. We conclude by discussing possible defense mechanisms and countermeasures against the attack." }
1804.09081
1708.05552
INTRODUCTION
We evaluate LEMONADE on two different search spaces for image classification: (i) non-modularized architectures and (ii) cells that are used as repeatable building blocks within an architecture #REFR and also allow transfer to other datasets.
[ "In contrast to generic multi-objective algorithms, LEMONADE exploits that evaluating certain objectives (such as an architecture's number of parameters) is cheap while evaluating the predictive performance on validation data is expensive (since it requires training the model first).", "Thus, LEMONADE handles its various objectives differently: it first selects a subset of architectures, assigning higher probability to architectures that would fill gaps on the Pareto front for the \"cheap\" objectives; then, it trains and evaluates only this subset, further reducing the computational resource requirements during architecture search.", "In contrast to other multi-objective architecture search methods, LEMONADE (i) does not require to define a trade-off between performance and other objectives a-priori (e.g., by weighting objectives when using scalarization methods) but rather returns a set of architectures, which allows the user to select a suitable model a-posteriori;", "(ii) LEMONADE does not require to be initialized with well performing architectures; it can be initialized with trivial architectures and hence requires less prior knowledge.", "Also, LEMONADE can handle various search spaces, including complex topologies with multiple branches and skip connections." ]
[ "In both cases, LEMONADE returns a population of CNNs covering architectures with 10 000 to 10 000 000 parameters.", "Within only one week on eight GPUs, LEMONADE discovers architectures that are competitive in terms of predictive performance and resource consumption with hand-designed networks, such as MobileNet V1, V2 #OTHEREFR , as well as architectures that were automatically designed using 40x greater resources and other multi-objective methods #OTHEREFR ." ]
[ "image classification" ]
method
{ "title": "Efficient Multi-objective Neural Architecture Search via Lamarckian Evolution", "abstract": "Neural Architecture Search aims at automatically finding neural architectures that are competitive with architectures designed by human experts. While recent approaches have achieved state-of-the-art predictive performance for image recognition, they are problematic under resource constraints for two reasons: (1) the neural architectures found are solely optimized for high predictive performance, without penalizing excessive resource consumption; (2) most architecture search methods require vast computational resources. We address the first shortcoming by proposing LEMONADE, an evolutionary algorithm for multi-objective architecture search that allows approximating the entire Pareto-front of architectures under multiple objectives, such as predictive performance and number of parameters, in a single run of the method. We address the second shortcoming by proposing a Lamarckian inheritance mechanism for LEMONADE which generates children networks that are warmstarted with the predictive performance of their trained parents. This is accomplished by using (approximate) network morphism operators for generating children. The combination of these two contributions allows finding models that are on par or even outperform both hand-crafted as well as automatically-designed networks." }
{ "title": "Practical Network Blocks Design with Q-Learning", "abstract": "Convolutional neural network provides an end-to-end solution to train many computer vision tasks and has gained great successes. However, the design of network architectures usually relies heavily on expert knowledge and is hand-crafted. In this paper, we provide a solution to automatically and efficiently design high performance network architectures. To reduce the search space of network design, we focus on constructing network blocks, which can be stacked to generate the whole network. Blocks are generated through an agent, which is trained with Q-learning to maximize the expected accuracy of the searching blocks on the learning task. Distributed asynchronous framework and early stop strategy are used to accelerate the training process. Our experimental results demonstrate that the network architectures designed by our approach perform competitively compared with handcrafted state-of-the-art networks. We trained the Q-learning on CIFAR-100, and evaluated on CIFAR10 and ImageNet, the designed block structure achieved 3.60% error on CIFAR-10 and competitive result on ImageNet. The Q-learning process can be efficiently trained only on 32 GPUs in 3 days." }
1806.10982
1708.09533
Network architecture
The generator has one additional enhancement relative to the default architecture: following the results in #REFR , we use one more upscaling step followed by an average pooling operator to force the network to take into account the opinions of neighboring pixels as well.
[ "In our case not only the discriminator should satisfy the specified above conditions but the encoder and the attribute classifier too.", "As a result, we are limited in choice of the state of the art architectures to carry out the feature extraction with better quality.", "But as it shows in practice the fundamental problems like GANs convergence and their diversity are more influential than the attendant problems.", "Finally, the architectures of the encoder and generator implemented in this paper can be found in Tables B.1 and B. 2.", "Note, the encoder has a normalizer at the end to carry out a normalization of each pair of values according to Eq. 4 limitations." ]
[]
[ "generator" ]
method
{ "title": "High Diversity Attribute Guided Face Generation with GANs", "abstract": "Abstract. In this work we focused on GAN-based solution for the attribute guided face synthesis. Previous works exploited GANs for generation of photo-realistic face images and did not pay attention to the question of diversity of the resulting images. The proposed solution in its turn introducing novel latent space of unit complex numbers is able to provide the diversity on the \"birthday paradox\" score 3 times higher than the size of the training dataset. It is important to emphasize that our result is shown on relatively small dataset (20k samples vs 200k) while preserving photo-realistic properties of generated faces on significantly higher resolution (128x128 in comparison to 32x32 of previous works)." }
{ "title": "Learning a Generative Adversarial Network for High Resolution Artwork Synthesis", "abstract": "Artwork is a mode of creative expression and this paper is particularly interested in investigating if machine can learn and synthetically create artwork that are usually nonfigurative and structured abstract. To this end, we propose an extension to the Generative Adversarial Network (GAN), namely as the ArtGAN to synthetically generate high quality artwork. This is in contrast to most of the current solutions that focused on generating structural images such as birds, flowers and faces. The key innovation of our work is to allow back-propagation of the loss function w.r.t. the labels (randomly assigned to each generated images) to the generator from the categorical autoencoder-based discriminator that incorporates an autoencoder into the categorical discriminator for additional complementary information. In order to synthesize a high resolution artwork, we include a novel magnified learning strategy to improve the correlations between neighbouring pixels. Based on visual inspection and Inception scores, we demonstrate that ArtGAN is able to draw high resolution and realistic artwork, as well as generate images of much higher quality in four other datasets (i.e. CIFAR-10, STL-10, Oxford-102 and CUB-200). 1 We name the network as ArtGAN since the nature of this work is to synthetically generate artwork. 2" }
1901.08787
1502.03532
NLPR_MCT dataset
Each sub-dataset includes 3-5 cameras with non-overlapping scenes and records different situations that vary in the number of people (ranging from 14 to 255) and in the level of illumination change and occlusion #REFR . The videos contain both real scenes and simulated environments.
[ "The NLPR_MCT dataset consists of four sub-datasets. A sub-dataset is depicted in Figure 6 † ." ]
[ "Each video was nearly 20 minutes long (except Dataset 3), with a rate of 25 fps.", "In this dataset, the topological connection information for every pair of entry/exit points for each sub-dataset is provided.", "We split the π i of an observation o i into π dataset did not provide separate training and test datasets, we learned the parameters for our method as well as the transition matrix, the mean and standard deviation of transition time for each possible transition pair of entry/exit points using first 70 percent of each dataset.", "The evaluation criteria used for the NLPR_MCT dataset was MCTA #OTHEREFR , multi-camera object tracking accuracy.", "It was modified based on CLEAR MOT #OTHEREFR and can be applied to MCT." ]
[ "3-5 cameras" ]
background
{ "title": "Multiple Hypothesis Tracking Algorithm for Multi-Target Multi-Camera Tracking with Disjoint Views", "abstract": "In this study, a multiple hypothesis tracking (MHT) algorithm for multi-target multi-camera tracking (MCT) with disjoint views is proposed. The authors' method forms track-hypothesis trees, and each branch of them represents a multi-camera track of a target that may move within a camera as well as move across cameras. Furthermore, multi-target tracking within a camera is performed simultaneously with the tree formation by manipulating a status of each track hypothesis. Each status represents three different stages of a multi-camera track: tracking, searching, and end-of-track. The tracking status means targets are tracked by a single camera tracker. In the searching status, the disappeared targets are examined if they reappear in other cameras. The end-of-track status does the target exited the camera network due to its lengthy invisibility. These three status assists MHT to form the track-hypothesis trees for multi-camera tracking. Furthermore, they present a gating technique for eliminating of unlikely observation-to-track association. In the experiments, they evaluate the proposed method using two datasets, DukeMTMC and NLPR_MCT, which demonstrates that the proposed method outperforms the state-of-the-art method in terms of improvement of the accuracy. In addition, they show that the proposed method can operate in real-time and online. A large number of cameras recently have been deployed to cover wide area. Besides, tracking multiple targets in a camera network becomes an important and challenging problem in visual surveillance systems since in-person monitoring wide area is costly and needs a lot of effort. Hence, it is desirable to develop multi-target multi-camera tracking (MTMCT) algorithm. In this paper, our goal is to develop an algorithm that can track multiple targets (especially for pedestrians in this work) in a camera network. The targets may move within a camera or move to another camera and the coverage of each camera does not overlap. To achieve this goal, we need to solve both single camera tracking (SCT) and multi-camera tracking (MCT). There has been great amount of effort made to SCT whereas relatively smaller amount of effort has been done for MCT with disjoint views. Moreover, most MCT approaches [1] [2] [3] only focus on tracking targets across cameras by assuming solved SCT in advance; thus, jointly tracking multiple targets in both within and across cameras still remains to be explored much further [4] . The proposed MHT algorithm tracks targets across cameras by maintaining the identities of observations which are obtained by solving SCT that tracks targets in within-camera. Thus, our method jointly tracks targets in both within and across cameras. In this work, we adopt the real-time and online method[5] to produce observations by tracking multiple targets in within-camera. These observations obtained from each camera are fed into the proposed MHT algorithm which solves MCT problem. The proposed MHT algorithm forms track-hypothesis trees with obtained observations either by adding a child node to hypothesis tree, which describes the association between an observation and an existing track hypothesis, or by creating a new tree with one root node indicating an observation, which describes the initiation of a new multi-camera track. 
Each branch in track-hypothesis trees represents a different across-camera data association result (i.e., a multi-camera track). To work in concert with SCT, every node in track-hypothesis trees designates a certain observation and all leaf nodes have a status. There are three statuses for the proposed MHT, each of which represents a different stage of a multi-camera track: tracking, searching, and end-of-track. With the status, the MHT can form the track-hypothesis trees while simultaneously solving SCT to produce observations. Then it selects the best set of track hypotheses as the multi-camera tracks from the track-hypothesis trees. Furthermore, we propose a gating mechanism to eliminate unlikely observation-to-track pairings; this also prevents track-hypothesis trees from unnecessary growth. We propose two gating mechanisms, speed gating and temporal gating, in order to deal with different tracking scenarios (tracking targets on the ground plane or image plane). For the appearance feature of an observation, we used a simple averaged color histogram as an appearance model after Convolutional Pose Machine [6] is applied to an image patch of a person in order to capture the pose variation. The experimental results show that our method achieves state-of-the-art performance on the DukeMTMC dataset and performs comparably to the state-of-the-art method on the NLPR_MCT dataset. Furthermore, we demonstrate that the proposed method is able to operate in real-time with real-time SCT in Section 4.5. The remainder of this paper is organized as follows. In Section 2, we review relevant previous works. The detailed explanation of the proposed method is given in Section 3. Section 3.1 describes how the proposed MHT forms track-hypothesis trees while it simultaneously works with SCT. The proposed gating mechanism is explained in Section 3.2. In Section 4, we report experimental results conducted on the DukeMTMC and NLPR_MCT datasets. Finally, we conclude the paper in Section 5. Single camera tracking (SCT), which tracks multiple targets in a single scene, is also called multi-object tracking (MOT). Many approaches have been proposed to improve MOT. Track-by-detection methods, which optimize a global objective function over many frames, have emerged as powerful MOT algorithms in recent years [7] . Network flow-based methods are successful approaches among track-by-detection techniques [8] [9] [10] . These methods efficiently optimize their objective function using the push-relabel method [9] ." }
{ "title": "An equalised global graphical model-based approach for multi-camera object tracking", "abstract": "Multi-camera non-overlapping visual object tracking system typically consists of two tasks: single camera object tracking and inter-camera object tracking. Since the state-of-theart approaches are yet not perform perfectly in real scenes, the errors in single camera object tracking module would propagate into the module of inter-camera object tracking, resulting much lower overall performance. In order to address this problem, we develop an approach that jointly optimise the single camera object tracking and inter-camera object tracking in an equalised global graphical model. Such an approach has the advantage of guaranteeing a good overall tracking performance even when there are limited amount of false tracking in single camera object tracking. Besides, the similarity metrics used in our approach improve the compatibility of the metrics used in the two different tasks. Results show that our approach achieve the state-of-the-art results in multi-camera non-overlapping tracking datasets." }
1705.02573
1509.00600
C. Regrasping
The set G ∩ P can, in fact, be grouped into a finite number of subsets, called grasp classes and placement classes #REFR .
[ "Pioneering works on regrasping problems, including #OTHEREFR - #OTHEREFR , characterized the set G ∩ P by means of discretization.", "Their methods are therefore limited to objects with low geometric complexity. However, Tournassoud et al.", "#OTHEREFR also proposed an interesting notion of Grasp-Placement Table, based on the discretization of G ∩ P, which captured the connectivity of G ∩P.", "Recent works on regrasping, such as #OTHEREFR , #OTHEREFR , and #OTHEREFR , also employed some kinds of graphs to represent the connectivity.", "Another line of works on regrasping focuses on improving implementations/execution of existing unimanual regrasping algorithms #OTHEREFR , #OTHEREFR ." ]
[ "Utilizing this fact, Lertkultanon and Pham #OTHEREFR introduced a high-level Grasp-Placement Graph, which showed potential connectivity between different connected components of G ∩ P.", "They proposed a manipulation planner that, with the guidance from the graph, explored the configuration space efficiently and systematically.", "One possible way to solve a bimanual manipulation planning problem is then to extend the high-level Grasp-Placement Graph #OTHEREFR , originally proposed for unimanual systems, to bimanual cases.", "However, the combinatorial complexity associated with grasp classes grows much too high, making this approach not suitable even in the case when the object has a moderate number of grasp classes. For example, consider a unimanual setting.", "Suppose that the start and goal placements have m grasps in common but no transfer path directly connecting the two placement classes exists." ]
[ "grasp classes" ]
background
{ "title": "A Certified-Complete Bimanual Manipulation Planner", "abstract": "Planning motions for two robot arms to move an object collaboratively is a difficult problem, mainly because of the closed-chain constraint, which arises whenever two robot hands simultaneously grasp a single rigid object. In this paper, we propose a manipulation planning algorithm to bring an object from an initial stable placement (position and orientation of the object on a support surface) toward a goal stable placement. The key specificity of our algorithm is that it is certified-complete: for a given object and a given environment, we provide a certificate that the algorithm will find a solution to any bimanual manipulation query in that environment whenever one exists. Moreover, the certificate is constructive: at run-time, it can be used to quickly find a solution to a given query. The algorithm is tested in software and hardware on a number of large pieces of furniture. by two robots. This problem arises naturally when manipulating a large and/or heavy object such as a piece of furniture and is therefore essential to industrial automation. The algorithm first precomputes a certificate, a set of robot motions to move the object between different placement classes that helps guarantee that the algorithm will find a solution to any planning query whenever one exists. This certificate is then used to quickly construct a solution trajectory to a planning query and can be reused under the same environment. The algorithm has been empirically verified through software and hardware experiments on a number of large pieces of furniture. An open-source implementation is provided." }
{ "title": "A Single-Query Manipulation Planner", "abstract": "Abstract-In manipulation tasks, a robot interacts with movable object(s). The configuration space in manipulation planning is thus the Cartesian product of the configuration space of the robot with those of the movable objects. It is the complex structure of such a \"composite configuration space\" that makes manipulation planning particularly challenging. Previous works approximate the connectivity of the composite configuration space by means of discretization or by creating random roadmaps. Such approaches involve an extensive preprocessing phase, which furthermore has to be redone each time the environment changes. In this letter, we propose a high-level Grasp-Placement Table similar to that proposed by Tournassoud et al. (1987), but which does not require any discretization or heavy pre-processing. The table captures the potential connectivity of the composite configuration space while being specific only to the movable objects: in particular, it does not require to be recomputed when the environment changes. During the query phase, the table is used to guide a tree-based planner that explores the space systematically. Our simulations and experiments show that the proposed method enables improvements in both running time and trajectory quality as compared to existing approaches." }
1705.02573
1509.00600
A. Background
The set of all valid grasps can be parameterized by a set of parameters #REFR , which is finite but not necessarily unique.
[ "One can represent a grasp by, e.g., a pair of relative transformations between each robot gripper and the object.", "Note that from the definition, any pair of relative transformations can be a grasp.", "However, the object can be moved only when being grasped by a valid grasp.", "The set of all valid grasps are to be determined by the users, either explicitly (e.g., as a set of grasps) or implicitly (e.g., as conditions to be satisfied by the grippers).", "Note also that there can be many pair of robot configurations (q 1 , q 2 ) corresponding to exactly the same grasp due to multiplicity of inverse kinematic (IK) solutions associated with the same grippers' poses." ]
[ "Consider for example an object composed entirely of boxes 3 and a gripper shown in Fig. 2 . Grasp parameters may be defined as follows #OTHEREFR .", "l is an integer indicating the index of the link (box) that the gripper is grasping.", "a is an integer indicating how the gripper is approaching the object.", "Assuming, without loss of generality, that each box is aligned with its local coordinate frame.", "The integer a may be a number from 1 to 6, where if a = 1, the gripper's approaching direction is aligned with the +x-axis of the box's local frame; if a = 2, the gripper's approaching direction is aligned with the +y-axis, and so on." ]
[ "valid grasps" ]
background
{ "title": "A Certified-Complete Bimanual Manipulation Planner", "abstract": "Planning motions for two robot arms to move an object collaboratively is a difficult problem, mainly because of the closed-chain constraint, which arises whenever two robot hands simultaneously grasp a single rigid object. In this paper, we propose a manipulation planning algorithm to bring an object from an initial stable placement (position and orientation of the object on a support surface) toward a goal stable placement. The key specificity of our algorithm is that it is certified-complete: for a given object and a given environment, we provide a certificate that the algorithm will find a solution to any bimanual manipulation query in that environment whenever one exists. Moreover, the certificate is constructive: at run-time, it can be used to quickly find a solution to a given query. The algorithm is tested in software and hardware on a number of large pieces of furniture. by two robots. This problem arises naturally when manipulating a large and/or heavy object such as a piece of furniture and is therefore essential to industrial automation. The algorithm first precomputes a certificate, a set of robot motions to move the object between different placement classes that helps guarantee that the algorithm will find a solution to any planning query whenever one exists. This certificate is then used to quickly construct a solution trajectory to a planning query and can be reused under the same environment. The algorithm has been empirically verified through software and hardware experiments on a number of large pieces of furniture. An open-source implementation is provided." }
{ "title": "A Single-Query Manipulation Planner", "abstract": "Abstract-In manipulation tasks, a robot interacts with movable object(s). The configuration space in manipulation planning is thus the Cartesian product of the configuration space of the robot with those of the movable objects. It is the complex structure of such a \"composite configuration space\" that makes manipulation planning particularly challenging. Previous works approximate the connectivity of the composite configuration space by means of discretization or by creating random roadmaps. Such approaches involve an extensive preprocessing phase, which furthermore has to be redone each time the environment changes. In this letter, we propose a high-level Grasp-Placement Table similar to that proposed by Tournassoud et al. (1987), but which does not require any discretization or heavy pre-processing. The table captures the potential connectivity of the composite configuration space while being specific only to the movable objects: in particular, it does not require to be recomputed when the environment changes. During the query phase, the table is used to guide a tree-based planner that explores the space systematically. Our simulations and experiments show that the proposed method enables improvements in both running time and trajectory quality as compared to existing approaches." }
1705.02573
1509.00600
C. Regrasping
The set G ∩ P can, in fact, be grouped into a finite number of subsets, called grasp classes and placement classes #REFR .
[ "The set of configurations satisfying the aforementioned criteria, denoted as G ∩P, and connectivity between its different connected component play significant roles in solving regrasping problems.", "Pioneering works on regrasping problems, including #OTHEREFR , #OTHEREFR , #OTHEREFR , characterized the set G ∩P by means of discretization.", "Their methods are therefore limited in a number of ways.", "However, the authors of #OTHEREFR also proposed an interesting notion of Grasp-Placement Table, based on the discretization of G ∩ P, which captured the connectivity of G ∩ P.", "More recent work on regrasping such as #OTHEREFR , #OTHEREFR also employed some kinds of graphs to represent the connectivity." ]
[ "Utilizing these facts, the authors of #OTHEREFR introduced a high-level Grasp-Placement Graph which showed potential connectivity between different connected components of G ∩ P.", "They proposed a manipulation planner which, with the guidance from the graph, explored the configuration space efficiently and systematically.", "One possible way to solve a bimanual manipulation planning problem is then to extend the high-level Grasp-Placement Graph, originally proposed for unimanual systems, to bimanual cases.", "However, the combinatorial complexity grows much too high, making this approach not suitable even in the case when the object has a moderate number of grasp classes. For example, consider a unimanual setting.", "Suppose the start and goal placements have m grasps in common but no transfer path directly connecting the two placement classes exists." ]
[ "grasp classes" ]
background
{ "title": "A Certified-Complete Bimanual Manipulation Planner Puttichai Lertkultanon", "abstract": "Planning motions for two robot arms to move an object collaboratively is a difficult problem, mainly because of the closed-chain constraint, which arises whenever two robot hands simultaneously grasp a single rigid object. In this paper, we propose a manipulation planning algorithm to bring an object from an initial stable placement (position and orientation of the object on the support surface) towards a goal stable placement. The key specificity of our algorithm is that it is certified-complete: for a given object and a given environment, we provide a certificate that the algorithm will find a solution to any bimanual manipulation query in that environment whenever one exists. Moreover, the certificate is constructive: at run-time, it can be used to quickly find a solution to a given query. The algorithm is tested in software and hardware on a number of large pieces of furniture. Note to Practitioners-This paper presents an algorithm to solve a difficult class of bimanual manipulation planning problems where a movable object can be moved only when grasped by two robots. These problems arise naturally when manipulating a large and/or heavy object such as a piece of furniture. With a given object and environment, we provide a method to compute a certificate that the algorithm will find a solution to any bimanual manipulation query in that environment whenever one exists. The certificate can also be used to quickly construct a solution to a given query. The algorithm is tested in software and hardware on a number of large pieces of furniture." }
{ "title": "A Single-Query Manipulation Planner", "abstract": "Abstract-In manipulation tasks, a robot interacts with movable object(s). The configuration space in manipulation planning is thus the Cartesian product of the configuration space of the robot with those of the movable objects. It is the complex structure of such a \"composite configuration space\" that makes manipulation planning particularly challenging. Previous works approximate the connectivity of the composite configuration space by means of discretization or by creating random roadmaps. Such approaches involve an extensive preprocessing phase, which furthermore has to be redone each time the environment changes. In this letter, we propose a high-level Grasp-Placement Table similar to that proposed by Tournassoud et al. (1987), but which does not require any discretization or heavy pre-processing. The table captures the potential connectivity of the composite configuration space while being specific only to the movable objects: in particular, it does not require to be recomputed when the environment changes. During the query phase, the table is used to guide a tree-based planner that explores the space systematically. Our simulations and experiments show that the proposed method enables improvements in both running time and trajectory quality as compared to existing approaches." }
1705.02573
1509.00600
Definition 5.
Both G and P can be partitioned into a finite number of grasp classes and placement classes, respectively #REFR .
[ "For convenience, we define a function π p : C → C O which projects a composite configuration c = (q 1 , q 2 , T ) into SE(3) such that π p (c) = T .", "There are two types of subsets of C induced by valid grasps and stable placements. Definition 6.", "Grasp configuration set, G , is the set of feasible composite configurations where the robots are grasping the object with a valid grasp. Definition 7.", "Placement configuration set, P, is the set of feasible composite configurations such that 1) ∀c ∈ P π p (c) is a stable placement and 2) ∀c ∈ P ∃c ∈ G π p (c ) = π p (c).", "The second requirement of the placement configuration set is to ensure that for any placement configuration c ∈ P, its corresponding placement is always reachable by some grasp." ]
[ "From the grasp parameters we introduced earlier, we define a grasp class as a subset of G whose configurations have the same grasp parameters l (link index) and a (approaching direction).", "For example, if the object is a box, there will be 6 grasp classes in total. Now consider partitioning of P. Let H be the convex hull of the object.", "All stable placements can be grouped based on which surface of H is in contact with the support surface.", "Therefore, a placement class is defined as a subset of P where at each configuration, the same face of H is in contact with the support surface.", "For convenience, we will also say that two object transformations are in the same placement class if at both transformations, the same face of the convex hull H is in contact with the support surface." ]
[ "grasp classes" ]
background
{ "title": "A Certified-Complete Bimanual Manipulation Planner Puttichai Lertkultanon", "abstract": "Planning motions for two robot arms to move an object collaboratively is a difficult problem, mainly because of the closed-chain constraint, which arises whenever two robot hands simultaneously grasp a single rigid object. In this paper, we propose a manipulation planning algorithm to bring an object from an initial stable placement (position and orientation of the object on the support surface) towards a goal stable placement. The key specificity of our algorithm is that it is certified-complete: for a given object and a given environment, we provide a certificate that the algorithm will find a solution to any bimanual manipulation query in that environment whenever one exists. Moreover, the certificate is constructive: at run-time, it can be used to quickly find a solution to a given query. The algorithm is tested in software and hardware on a number of large pieces of furniture. Note to Practitioners-This paper presents an algorithm to solve a difficult class of bimanual manipulation planning problems where a movable object can be moved only when grasped by two robots. These problems arise naturally when manipulating a large and/or heavy object such as a piece of furniture. With a given object and environment, we provide a method to compute a certificate that the algorithm will find a solution to any bimanual manipulation query in that environment whenever one exists. The certificate can also be used to quickly construct a solution to a given query. The algorithm is tested in software and hardware on a number of large pieces of furniture." }
{ "title": "A Single-Query Manipulation Planner", "abstract": "Abstract-In manipulation tasks, a robot interacts with movable object(s). The configuration space in manipulation planning is thus the Cartesian product of the configuration space of the robot with those of the movable objects. It is the complex structure of such a \"composite configuration space\" that makes manipulation planning particularly challenging. Previous works approximate the connectivity of the composite configuration space by means of discretization or by creating random roadmaps. Such approaches involve an extensive preprocessing phase, which furthermore has to be redone each time the environment changes. In this letter, we propose a high-level Grasp-Placement Table similar to that proposed by Tournassoud et al. (1987), but which does not require any discretization or heavy pre-processing. The table captures the potential connectivity of the composite configuration space while being specific only to the movable objects: in particular, it does not require to be recomputed when the environment changes. During the query phase, the table is used to guide a tree-based planner that explores the space systematically. Our simulations and experiments show that the proposed method enables improvements in both running time and trajectory quality as compared to existing approaches." }
1007.2818
1012.4189
Modeling Complex Systems
Some optimization problems, such as the optimization of logistic or traffic signal operations, are algorithmically complex #REFR .
[ "These fields have considerably advanced our understanding of complex systems.", "In this connection, one should be aware that the term \"complexity\" is used in many different ways.", "In the following, we will distinguish three kinds of complexity:", "1. structural, 2. dynamical, and 3. functional complexity.", "One could also add algorithmic complexity, which is given by the amount of computational time needed to solve certain problems." ]
[ "Linear models are not considered to be complex, no matter how many terms they contain.", "An example for structural complexity is a car or airplane.", "They are constructed in a way that is dynamically more or less deterministic and well controllable, i.e.", "dynamically simple, and they also serve relatively simple functions (the motion from a location A to another location B).", "While the acceleration of a car or a periodic oscillation would be an example for a simple dynamics, examples for complex dynamical behavior are non-periodic changes, deterministic chaos, or history-dependent behaviors." ]
[ "traffic signal operations", "logistic" ]
background
{ "title": "Pluralistic Modeling of Complex Systems", "abstract": "The modeling of complex systems such as ecological or socio-economic systems can be very challenging. Although various modeling approaches exist, they are generally not compatible and mutually consistent, and empirical data often do not allow one to decide what model is the right one, the best one, or most appropriate one. Moreover, as the recent financial and economic crisis shows, relying on a single, idealized model can be very costly. This contribution tries to shed new light on problems that arise when complex systems are modeled. While the arguments can be transferred to many different systems, the related scientific challenges are illustrated for social, economic, and traffic systems. The contribution discusses issues that are sometimes overlooked and tries to overcome some frequent misunderstandings and controversies of the past. At the same time, it is highlighted how some long-standing scientific puzzles may be solved by considering non-linear models of heterogeneous agents with spatio-temporal interactions. As a result of the analysis, it is concluded that a paradigm shift towards a pluralistic or possibilistic modeling approach, which integrates multiple world views, is overdue. In this connection, it is argued that it can be useful to combine many different approaches to obtain a good picture of reality, even though they may be inconsistent. Finally, it is identified what would be profitable areas of collaboration between the socio-economic, natural, and engineering sciences." }
{ "title": "BioLogistics and the Struggle for Efficiency: Concepts and Perspectives", "abstract": "The growth of world population, limitation of resources, economic problems and environmental issues force engineers to develop increasingly efficient solutions for logistic systems. Pure optimization for efficiency, however, has often led to technical solutions that are vulnerable to variations in supply and demand, and to perturbations. In contrast, nature already provides a large variety of efficient, flexible and robust logistic solutions. Can we utilize biological principles to design systems, which can flexibly adapt to hardly predictable, fluctuating conditions? We propose a bio-inspired \"BioLogistics\" approach to deduce dynamic organization processes and principles of adaptive self-control from biological systems, and to transfer them to man-made logistics (including nanologistics), using principles of modularity, self-assembly, self-organization, and decentralized coordination. Conversely, logistic models can help revealing the logic of biological processes at the systems level. Keywords: logistics; transportation; bio-inspired solutions; robustness; self-control; modularity. When the newly built Heathrow terminal 5 went into operation in 2008, it marked the beginning of a disaster: Thousands of lost luggage items were piling up rapidly, passengers were delayed, etc. It took about a week to fix the problem. In the complex logistic and supply systems of today's highly connected, globalized world, similar systemic failures occur again and again. Triggered by insufficient responses to locally varying supplies or demands, a problem can quickly spread over large parts of the system. Examples for such problems range from blackouts of electric power grids up to the current crises of the automotive industry and the financial sector. This indicates that attempts to create highly efficient systems are often compromised by the sensitivity of large man-made structures to perturbations or varying demands, failures or attacks. Maximizing efficiency and profits often implies that redundancies and safety margins are minimized under the constraint that certain, yet acceptable failure rates are just kept. When connecting such systems to form larger ones, coincidences 1 arXiv:1012.4189v1 [physics.soc-ph]" }
1803.04566
1611.08024
Detecting frequency and phase information with the Compact-CNN
Since the first layer of our Compact-CNN is a temporal convolution (whose weights need to be learned from the data), our network has the capability to learn frequency-specific temporal filters, including EEG features as shown in our previous work #REFR .
[ "The Convolution Theorem states that convolutions of signals in the time-domain relate to multiplication in the frequency-domain." ]
[ "We showed in Figure 3 that the Compact-CNN model is capable of extracting narrow-band task-specific slowwave and fast-wave frequency activity.", "We believe that our network is also capturing information correlated to frequency through the use of the average-pooling layer in Layer 1 of our model.", "The sequence of operations in Layer 1 (temporal convolution, ELU non-linearity then average-pooling) is similar to the methodology of #OTHEREFR for calculating event-related synchronization and desynchronization features.", "In their work, they narrow-band filter, then square, then average over a moving window the signal to obtain an estimate of frequency power.", "The similarity of this approach to the operations in Layer 1 of the Compact-CNN suggests that our model is calculating features at least correlated to that of frequency power." ]
[ "EEG features" ]
method
{ "title": "Compact Convolutional Neural Networks for Classification of Asynchronous Steady-state Visual Evoked Potentials", "abstract": "Objective. Steady-State Visual Evoked Potentials (SSVEPs) are neural oscillations from the parietal and occipital regions of the brain that are evoked from flickering visual stimuli. SSVEPs are robust signals measurable in the electroencephalogram (EEG) and are commonly used in brain-computer interfaces (BCIs). However, methods for high-accuracy decoding of SSVEPs usually require hand-crafted approaches that leverage domain-specific knowledge of the stimulus signals, such as specific temporal frequencies in the visual stimuli and their relative spatial arrangement. When this knowledge is unavailable, such as when SSVEP signals are acquired asynchronously, such approaches tend to fail. Approach. In this paper, we show how a compact convolutional neural network (Compact-CNN), which only requires raw EEG signals for automatic feature extraction, can be used to decode signals from a 12-class SSVEP dataset without the need for user-specific calibration. Main results. The Compact-CNN demonstrates across subject mean accuracy of approximately 80 %, out-performing current state-of-the-art, handcrafted approaches using canonical correlation analysis (CCA) and Combined-CCA. Furthermore, the Compact-CNN approach can reveal the underlying feature representation, revealing that the deep learner extracts additional phase-and amplitude-related features associated with the structure of the dataset. Significance. We discuss how our Compact-CNN shows promise for BCI applications that allow users to freely gaze/attend to any stimulus at any time (e.g., asynchronous BCI) as well as provides a method for analyzing SSVEP signals in a way that might augment our understanding about the basic processing in the visual cortex." }
{ "title": "EEGNet: A Compact Convolutional Network for EEG-based Brain-Computer Interfaces", "abstract": "Objective: Brain-Computer Interface technologies (BCI) enable the direct communication between humans and computers by analyzing brain measurements, such as electroencephalography (EEG). These technologies have been applied to a variety of domains, including neuroprosthetic control and the monitoring of epileptic seizures. Existing BCI systems primarily use a priori knowledge of EEG features of interest to build machine learning models. Recently, convolutional networks have been used for automatic feature extraction of large image databases, where they have obtained state-of-the-art results. In this work we introduce EEGNet, a compact fully convolutional network for EEG-based BCIs developed using Deep Learning approaches. Methods: EEGNet is a 4-layer convolutional network that uses filter factorization for learning a compact representation of EEG time series. EEGNet is one of the smallest convolutional networks to date, having less than 2200 parameters for a binary classification. Results: We show state-of-the-art classification performance across four different BCI paradigms: P300 event-related potential, error-related negativity, movement-related cortical potential, and sensory motor rhythm, with as few as 500 EEG trials. We also show that adding more trials reduces the error variance of prediction rather than improving classification performance. Conclusion: We provide preliminary evidence suggesting that our model can be used with small EEG databases while improving upon the state-of-the-art performance across several tasks and across subjects. Significance: The EEGNet neural network architecture provides state-of-the-art performance across several tasks and across subjects, challenging the notion that large datasets are required to obtain optimal performance." }
1805.01667
1611.08024
III. PREPROCESSING, DECODING & STATISTICS
Additionally, we used a 34-layered ResNet architecture, and the compact EEGNet architecture was reimplemented as described in #REFR , as the EEGNet code from the original publication was not available.
[ "Each electrode was then assigned to a specific brain region by calculating cytoarchitectonic probabilistic maps in the SPM anatomy toolbox #OTHEREFR .", "Intracranial EEG data were re-referenced bipolarly between the respective neighbors to be specific for local effects and reduce external noise contamination, and resampled to 250 Hz.", "Other than that, the EEG data were only minimally pre-processed, as described in #OTHEREFR , to operate under application-oriented conditions.", "We used open-source python implementations for both rLDA #OTHEREFR and CNN classifiers.", "Deep4Net and ShallowNet architectures were employed as described in #OTHEREFR and available in the Braindecode Toolbox #OTHEREFR ." ]
[ "As optimizer, we used AdamW #OTHEREFR with cosine annealing #OTHEREFR , a weight decay of 0.002 and an initial learning rate of 0.01 32 .", "For each recording, the first 60 % of the data was used for training, and the last 40 % were reserved as final evaluation set, which was only used to test the final accuracies.", "Statistical significance of the single-channel classifications was evaluated by randomly permuting the true labels of the test set 10 6 times to generate a null distribution.", "For significance of brain region accuracy averages and classifier comparisons, a Wilcoxon signed rank test was employed #OTHEREFR .", "The classification of errors is typically a problem with a strong trial imbalance, as correct trials occur far more often in realistic applications." ]
[ "34-layered ResNet architecture", "EEGNet code" ]
method
{ "title": "Intracranial Error Detection via Deep Learning", "abstract": "Abstract-Deep learning techniques have revolutionized the field of machine learning and were recently successfully applied to various classification problems in noninvasive electroencephalography (EEG). However, these methods were so far only rarely evaluated for use in intracranial EEG. We employed convolutional neural networks (CNNs) to classify and characterize the error-related brain response as measured in 24 intracranial EEG recordings. Decoding accuracies of CNNs were significantly higher than those of a regularized linear discriminant analysis. Using time-resolved deep decoding, it was possible to classify errors in various regions in the human brain, and further to decode errors over 200 ms before the actual erroneous button press, e.g., in the precentral gyrus. Moreover, deeper networks performed better than shallower networks in distinguishing correct from error trials in all-channel decoding. In single recordings, up to 100 % decoding accuracy was achieved. Visualization of the networks' learned features indicated that multivariate decoding on an ensemble of channels yields related, albeit non-redundant information compared to single-channel decoding. In summary, here we show the usefulness of deep learning for both intracranial error decoding and mapping of the spatio-temporal structure of the human error processing network." }
{ "title": "EEGNet: A Compact Convolutional Network for EEG-based Brain-Computer Interfaces", "abstract": "Objective: Brain-Computer Interface technologies (BCI) enable the direct communication between humans and computers by analyzing brain measurements, such as electroencephalography (EEG). These technologies have been applied to a variety of domains, including neuroprosthetic control and the monitoring of epileptic seizures. Existing BCI systems primarily use a priori knowledge of EEG features of interest to build machine learning models. Recently, convolutional networks have been used for automatic feature extraction of large image databases, where they have obtained state-of-the-art results. In this work we introduce EEGNet, a compact fully convolutional network for EEG-based BCIs developed using Deep Learning approaches. Methods: EEGNet is a 4-layer convolutional network that uses filter factorization for learning a compact representation of EEG time series. EEGNet is one of the smallest convolutional networks to date, having less than 2200 parameters for a binary classification. Results: We show state-of-the-art classification performance across four different BCI paradigms: P300 event-related potential, error-related negativity, movement-related cortical potential, and sensory motor rhythm, with as few as 500 EEG trials. We also show that adding more trials reduces the error variance of prediction rather than improving classification performance. Conclusion: We provide preliminary evidence suggesting that our model can be used with small EEG databases while improving upon the state-of-the-art performance across several tasks and across subjects. Significance: The EEGNet neural network architecture provides state-of-the-art performance across several tasks and across subjects, challenging the notion that large datasets are required to obtain optimal performance." }
1805.04157
1611.08024
II. RELATED WORK
The authors in #REFR introduce EEGNet, a CNN model for wet-EEG data across paradigms.
[ "The data from the stimuli is pre-processed for all approaches with the CNN-1 method providing the best accuracy results across both EEG data genres.", "A five class SSVEP signal problem is classified using both traditional machine learning approaches and deep learning #OTHEREFR .", "The authors analyse the dataset from the Physionet [18] which used the traditional wet-EEG with five flickering stimuli frequencies.", "These authors proposed CNN and RNN with Long-short Term Memory (LSTM) for the deep learning methods against traditional classifiers like k-Nearest Neighbour (k-NN), Multi-layer Perceptron (MLP), decision trees and SVM.", "Within all the classifiers, CNN outperformed other approaches with a mean accuracy of 69.03% and within the traditional classifiers, SVM provided the best overall accuracy." ]
[ "The paper includes four datasets for four different paradigms (P300 Event-Related Potential, Error-Related Negativity, Movement-Related Cortical Potential, and Sensorimotor Rhythm).", "All the datasets come from different sources with different data sizes.", "These authors pre-process the data before training the datasets using different approaches including both shallow CNN and deep CNN for within subject classification and across subject classification and for all four paradigms.", "Inconclusively, the results demonstrate that different paradigms perform differently for every approach.", "In contrast to these earlier works, we explicitly consider an end-to-end approach, without the need for EEG signal pre-processing, to tackle single subject, multiple subject and unseen subject SSVEP-based dry-EEG signal classification challenges." ]
[ "wet-EEG data" ]
background
{ "title": "On the Classification of SSVEP-Based Dry-EEG Signals via Convolutional Neural Networks", "abstract": "Abstract-Electroencephalography (EEG) is a common signal acquisition approach employed for Brain-Computer Interface (BCI) research. Nevertheless, the majority of EEG acquisition devices rely on the cumbersome application of conductive gel (so-called wet-EEG) to ensure a high quality signal is obtained. However, this process is unpleasant for the experimental participants and thus limits the practical application of BCI. In this work, we explore the use of a commercially available dry-EEG headset to obtain visual cortical ensemble signals. Whilst improving the usability of EEG within the BCI context, dry-EEG suffers from inherently reduced signal quality due to the lack of conduit gel, making the classification of such signals significantly more challenging. In this paper, we propose a novel Convolutional Neural Network (CNN) approach for the classification of raw dry-EEG signals without any data pre-processing. To illustrate the effectiveness of our approach, we utilise the Steady State Visual Evoked Potential (SSVEP) paradigm as our use case. SSVEP can be utilised to allow people with severe physical disabilities such as Complete Locked-In Syndrome or Amyotrophic Lateral Sclerosis to be aided via BCI applications, as it requires only the subject to fixate upon the sensory stimuli of interest. Here we utilise SSVEP flicker frequencies between 10 to 30 Hz, which we record as subject cortical waveforms via the dry-EEG headset. Our proposed end-to-end CNN allows us to automatically and accurately classify SSVEP stimulation directly from the dry-EEG waveforms. Our CNN architecture utilises a common SSVEP Convolutional Unit (SCU), comprising of a 1D convolutional layer, batch normalization and max pooling. Furthermore, we compare several deep learning neural network variants with our primary CNN architecture, in addition to traditional machine learning classification approaches. Experimental evaluation shows our CNN architecture to be significantly better than competing approaches, achieving a classification accuracy of 96% whilst demonstrating superior cross-subject performance and even being able to generalise well to unseen subjects whose data is entirely absent from the training process." }
{ "title": "EEGNet: A Compact Convolutional Network for EEG-based Brain-Computer Interfaces", "abstract": "Objective: Brain-Computer Interface technologies (BCI) enable the direct communication between humans and computers by analyzing brain measurements, such as electroencephalography (EEG). These technologies have been applied to a variety of domains, including neuroprosthetic control and the monitoring of epileptic seizures. Existing BCI systems primarily use a priori knowledge of EEG features of interest to build machine learning models. Recently, convolutional networks have been used for automatic feature extraction of large image databases, where they have obtained state-of-the-art results. In this work we introduce EEGNet, a compact fully convolutional network for EEG-based BCIs developed using Deep Learning approaches. Methods: EEGNet is a 4-layer convolutional network that uses filter factorization for learning a compact representation of EEG time series. EEGNet is one of the smallest convolutional networks to date, having less than 2200 parameters for a binary classification. Results: We show state-of-the-art classification performance across four different BCI paradigms: P300 event-related potential, error-related negativity, movement-related cortical potential, and sensory motor rhythm, with as few as 500 EEG trials. We also show that adding more trials reduces the error variance of prediction rather than improving classification performance. Conclusion: We provide preliminary evidence suggesting that our model can be used with small EEG databases while improving upon the state-of-the-art performance across several tasks and across subjects. Significance: The EEGNet neural network architecture provides state-of-the-art performance across several tasks and across subjects, challenging the notion that large datasets are required to obtain optimal performance." }
1809.00929
1611.08024
Introduction
It extends EEGNet #REFR , a convolutional neural network (CNN) originally designed for classification problems in brain-computer interfaces (BCIs), to regression problems. 2.
[ "This paper focuses on the contact sensor based detection approaches. More specifically, we consider EEG-based driver drowsiness detection.", "The main reason is that EEG signals, which directly measure the brain state, have the potential to predict the drowsiness before it reaches a dangerous level.", "Hence, compared with other approaches, there is ample time to alert the driver to avoid accidents.", "There has been research on using deep learning #OTHEREFR for driver drowsiness classification. This paper considers regression instead of classification. It makes the following three contributions:", "1." ]
[ "It uses spectral meta-learner for regression (SMLR) #OTHEREFR , an unsupervised ensemble regression approach, to aggregate multiple EEGNet regression models for improved performance. 3.", "Instead of using raw EEG signals as the input to EEGNet, it uses their power spectral density (PSD) at certain frequencies as the input, which significantly saves the computational cost, and also improves the regression performance.", "The remainder of this paper is organized as follows: Section 2 introduces our proposed EEGNet-PSD-SMLR approach.", "Section 3 presents the details of a drowsy driving experiment in a virtual reality (VR) environment, and the performance comparison of EEGNet-PSD-SMLR with several other approaches.", "Finally, Section 4 draws conclusions and points out a future research direction." ]
[ "brain-computer interface", "BCI" ]
method
{ "title": "A ug 2 01 8 EEG-Based Driver Drowsiness Estimation Using Convolutional Neural Networks", "abstract": "Abstract. Deep learning, including convolutional neural networks (CNNs), has started finding applications in brain-computer interfaces (BCIs). However, so far most such approaches focused on BCI classification problems. This paper extends EEGNet, a 3-layer CNN model for BCI classification, to BCI regression, and also utilizes a novel spectral meta-learner for regression (SMLR) approach to aggregate multiple EEGNets for improved performance. Our model uses the power spectral density (PSD) of EEG signals as the input. Compared with raw EEG inputs, the PSD inputs can reduce the computational cost significantly, yet achieve much better regression performance. Experiments on driver drowsiness estimation from EEG signals demonstrate the outstanding performance of our approach." }
{ "title": "EEGNet: A Compact Convolutional Network for EEG-based Brain-Computer Interfaces", "abstract": "Objective: Brain-Computer Interface technologies (BCI) enable the direct communication between humans and computers by analyzing brain measurements, such as electroencephalography (EEG). These technologies have been applied to a variety of domains, including neuroprosthetic control and the monitoring of epileptic seizures. Existing BCI systems primarily use a priori knowledge of EEG features of interest to build machine learning models. Recently, convolutional networks have been used for automatic feature extraction of large image databases, where they have obtained state-of-the-art results. In this work we introduce EEGNet, a compact fully convolutional network for EEG-based BCIs developed using Deep Learning approaches. Methods: EEGNet is a 4-layer convolutional network that uses filter factorization for learning a compact representation of EEG time series. EEGNet is one of the smallest convolutional networks to date, having less than 2200 parameters for a binary classification. Results: We show state-of-the-art classification performance across four different BCI paradigms: P300 event-related potential, error-related negativity, movement-related cortical potential, and sensory motor rhythm, with as few as 500 EEG trials. We also show that adding more trials reduces the error variance of prediction rather than improving classification performance. Conclusion: We provide preliminary evidence suggesting that our model can be used with small EEG databases while improving upon the state-of-the-art performance across several tasks and across subjects. Significance: The EEGNet neural network architecture provides state-of-the-art performance across several tasks and across subjects, challenging the notion that large datasets are required to obtain optimal performance." }
1907.01332
1611.08024
Related Work
To attain a level of performance comparable to that of the model introduced by Lawhern et al. #REFR , Schirrmeister et al.
[ "Finally they introduce shallow ConvNets, which are inspired by the Filter Bank Common Spatial Patterns (FBCSP) #OTHEREFR pipeline, specifically tailored to decode band power features in EEG signals.", "The main difference with the two previous works is the introduction of a technique for data augmentation inspired by the field of computer vision, called cropped training strategy.", "The authors augment the data by creating one crop per time-step in the the EEG trial time series.", "Following this strategy they adopt a cropping approach that leads to 625 crops per epoch.", "The motivation of the authors for implementing aggressive cropping on the training data, is to force the model to put emphasis on the features that are present in all crops of the trail." ]
[ "had to train the model on a supplementary dataset, called the high-gamma dataset, which sees a four fold increase in datapoints in comparison to the dataset of Lawhern et al. #OTHEREFR .", "This addition of new data shows that their model requires a large number of training examples, which is not always available.", "Spectral Transfer using Information Geometry (STIG) introduced by Waytowich et al. #OTHEREFR addresses transferability using an unsupervised technique.", "STIG ranks and combines unlabeled predictions from an ensemble of information geometry classifiers, built on data from individual training subjects.", "This method outperforms existing calibration-free techniques as well as traditional within-subject calibration techniques when limited data is available." ]
[ "model" ]
method
{ "title": "Applying Transfer Learning To Deep Learned Models For EEG Analysis", "abstract": "The introduction of deep learning and transfer learning techniques in fields such as computer vision allowed a leap forward in the accuracy of image classification tasks. Currently there is only limited use of such techniques in neuroscience. The challenge of using deep learning methods to successfully train models in neuroscience, lies in the complexity of the information that is processed, the availability of data and the cost of producing sufficient high quality annotations. Inspired by its application in computer vision, we introduce transfer learning on electrophysiological data to enable training a model with limited amounts of data. Our method was tested on the dataset of the BCI competition IV 2a and compared to the top results that were obtained using traditional machine learning techniques. Using our DL model we outperform the top result of the competition by 33%. We also explore transferability of knowledge between trained models over different experiments, called inter-experimental transfer learning. This reduces the amount of required data even further and is especially useful when few subjects are available. This method is able to outperform the standard deep learning methods used in the BCI competition IV 2b approaches by 18%. In this project we propose a method that can produce reliable electroencephalography (EEG) signal classification, based on modest amounts of training data through the use of transfer learning." }
{ "title": "EEGNet: A Compact Convolutional Network for EEG-based Brain-Computer Interfaces", "abstract": "Objective: Brain-Computer Interface technologies (BCI) enable the direct communication between humans and computers by analyzing brain measurements, such as electroencephalography (EEG). These technologies have been applied to a variety of domains, including neuroprosthetic control and the monitoring of epileptic seizures. Existing BCI systems primarily use a priori knowledge of EEG features of interest to build machine learning models. Recently, convolutional networks have been used for automatic feature extraction of large image databases, where they have obtained state-of-the-art results. In this work we introduce EEGNet, a compact fully convolutional network for EEG-based BCIs developed using Deep Learning approaches. Methods: EEGNet is a 4-layer convolutional network that uses filter factorization for learning a compact representation of EEG time series. EEGNet is one of the smallest convolutional networks to date, having less than 2200 parameters for a binary classification. Results: We show state-of-the-art classification performance across four different BCI paradigms: P300 event-related potential, error-related negativity, movement-related cortical potential, and sensory motor rhythm, with as few as 500 EEG trials. We also show that adding more trials reduces the error variance of prediction rather than improving classification performance. Conclusion: We provide preliminary evidence suggesting that our model can be used with small EEG databases while improving upon the state-of-the-art performance across several tasks and across subjects. Significance: The EEGNet neural network architecture provides state-of-the-art performance across several tasks and across subjects, challenging the notion that large datasets are required to obtain optimal performance." }
1807.11752
1611.08024
IV. DISCUSSION
In #REFR , a 68% offline accuracy is reported on sensorimotor rhythms using 3 convolution layers.
[ "However, compared to ours, their architecture would have required longer training times, was not tested online and was applied to a memory paradigm.", "Training CNNs depends very much on the initialisation when data is scarce. In contrast to Schirrmeister et al.", "#OTHEREFR , we did observe certain instabilities in the training and averaged performance measures over several random initialisation seeds in order to get robust estimates.", "The same authors found that a shallower architecture of one convolution layer provided similar results than another one with 4 convolution layers, achieving 84% accuracy offline.", "This was followed by a 76.7% accuracy on the online implementation of the combination of the deep and shallow architectures (or hybrid architecture) of the previous authors by Burget et al. #OTHEREFR ." ]
[ "In the mentioned studies no artefact correction is applied to the input, neural sources are rather checked a posteriori.", "Cybathlon rules out this possibility requiring an online implementation for artefact correction which could be a reason for our lower performances.", "Indeed, our average race completion time (147s) is comparable to those published for the Cybathlon's BCI-Race 2016, where races are finished in between 140 and 180s as reported in #OTHEREFR .", "In particular, offline accuracies without prior ICA correction reached up to 90% average accuracy for our more complex 3D-SmallNet architecture.", "We used the same ICA matrix computed on video-recorded data for all studies to make sure that all data was preprocessed equally." ]
[ "sensorimotor rhythms" ]
method
{ "title": "Compact Convolutional Neural Networks for Multi-Class, Personalised, Closed-Loop EEG-BCI", "abstract": "For many people suffering from motor disabilities, assistive devices controlled with only brain activity are the only way to interact with their environment [1] . Natural tasks often require different kinds of interactions, involving different controllers the user should be able to select in a self-paced way. We developed a Brain-Computer Interface (BCI) allowing users to switch between four control modes in a self-paced way in real-time. Since the system is devised to be used in domestic environments in a user-friendly way, we selected non-invasive electroencephalographic (EEG) signals and convolutional neural networks (CNNs), known for their ability to find the optimal features in classification tasks. We tested our system using the Cybathlon BCI computer game, which embodies all the challenges inherent to real-time control. Our preliminary results show that an efficient architecture (SmallNet), with only one convolutional layer, can classify 4 mental activities chosen by the user. The BCI system is run and validated online. It is kept up-to-date through the use of newly collected signals along playing, reaching an online accuracy of 47.6% where most approaches only report results obtained offline. We found that models trained with data collected online better predicted the behaviour of the system in real-time. This suggests that similar (CNN based) offline classifying methods found in the literature might experience a drop in performance when applied online. Compared to our previous decoder of physiological signals relying on blinks, we increased by a factor 2 the amount of states among which the user can transit, bringing the opportunity for finer control of specific subtasks composing natural grasping in a self-paced way. Our results are comparable to those showed at the Cybathlon's BCI Race but further improvements on accuracy are required." }
{ "title": "EEGNet: A Compact Convolutional Network for EEG-based Brain-Computer Interfaces", "abstract": "Objective: Brain-Computer Interface technologies (BCI) enable the direct communication between humans and computers by analyzing brain measurements, such as electroencephalography (EEG). These technologies have been applied to a variety of domains, including neuroprosthetic control and the monitoring of epileptic seizures. Existing BCI systems primarily use a priori knowledge of EEG features of interest to build machine learning models. Recently, convolutional networks have been used for automatic feature extraction of large image databases, where they have obtained state-of-the-art results. In this work we introduce EEGNet, a compact fully convolutional network for EEG-based BCIs developed using Deep Learning approaches. Methods: EEGNet is a 4-layer convolutional network that uses filter factorization for learning a compact representation of EEG time series. EEGNet is one of the smallest convolutional networks to date, having less than 2200 parameters for a binary classification. Results: We show state-of-the-art classification performance across four different BCI paradigms: P300 event-related potential, error-related negativity, movement-related cortical potential, and sensory motor rhythm, with as few as 500 EEG trials. We also show that adding more trials reduces the error variance of prediction rather than improving classification performance. Conclusion: We provide preliminary evidence suggesting that our model can be used with small EEG databases while improving upon the state-of-the-art performance across several tasks and across subjects. Significance: The EEGNet neural network architecture provides state-of-the-art performance across several tasks and across subjects, challenging the notion that large datasets are required to obtain optimal performance." }
1912.01171
1611.08024
C. TLM-Based UAP
Specifically, we solve the following optimization problem: #REFR where l(x + v, y) is a loss function, in which y is the (true or predicted) label of example x, C(x, v) the constraint on the perturbation v, and α the regularization coefficient.
[ "Different from the DeepFool-based algorithm, TLM directly optimizes an objective function w.r.t. the UAP by batch gradient descent.", "In white-box attacks, the parameters of the victim model are known and fixed, and hence we can view the UAP as a variable to minimize an objective function on the entire training set.", "Algorithm 2: DeepFool-based algorithm for generating a UAP #OTHEREFR .", "Input: X = {x i } n i=1 , n input examples; k, the classifier; ξ, the maximum ℓ p norm of the UAP; δ, the desired ASR; M , the maximum number of iterations.", "Use DeepFool to compute the minimal perturbation △v i in (4); Update the perturbation by (5):" ]
[ "Our proposed approach is highly flexible, as the attacker can choose different optimizers, loss functions, or constraints, according to the specific task.", "Our approach can be applied to both target and non-target attacks by simply updating the loss function l.", "For non-target attacks, the loss function l can be defined as:", "where p y (x) is the predicted probability corresponding to the true label y.", "We could also use arg max j p j (x), i.e., the predicted label, to replace y if the true label is not available." ]
[ "regularization" ]
method
{ "title": "Universal Adversarial Perturbations for CNN Classifiers in EEG-Based BCIs", "abstract": "Multiple convolutional neural network (CNN) classifiers have been proposed for electroencephalogram (EEG) based brain-computer interfaces (BCIs). However, CNN models have been found vulnerable to universal adversarial perturbations (UAPs), which are small and example-independent, yet powerful enough to degrade the performance of a CNN model, when added to a benign example. This paper proposes a novel total loss minimization (TLM) approach to generate UAPs for EEG-based BCIs. Experimental results demonstrate the effectiveness of TLM on three popular CNN classifiers for both target and non-target attacks. We also verify the transferability of UAPs in EEG-based BCI systems. To our knowledge, this is the first study on UAPs of CNN classifiers in EEG-based BCIs, and also the first study on UAPs for target attacks. UAPs are easy to construct, and can attack BCIs in real-time, exposing a critical security concern of BCIs." }
{ "title": "EEGNet: A Compact Convolutional Network for EEG-based Brain-Computer Interfaces", "abstract": "Objective: Brain-Computer Interface technologies (BCI) enable the direct communication between humans and computers by analyzing brain measurements, such as electroencephalography (EEG). These technologies have been applied to a variety of domains, including neuroprosthetic control and the monitoring of epileptic seizures. Existing BCI systems primarily use a priori knowledge of EEG features of interest to build machine learning models. Recently, convolutional networks have been used for automatic feature extraction of large image databases, where they have obtained state-of-the-art results. In this work we introduce EEGNet, a compact fully convolutional network for EEG-based BCIs developed using Deep Learning approaches. Methods: EEGNet is a 4-layer convolutional network that uses filter factorization for learning a compact representation of EEG time series. EEGNet is one of the smallest convolutional networks to date, having less than 2200 parameters for a binary classification. Results: We show state-of-the-art classification performance across four different BCI paradigms: P300 event-related potential, error-related negativity, movement-related cortical potential, and sensory motor rhythm, with as few as 500 EEG trials. We also show that adding more trials reduces the error variance of prediction rather than improving classification performance. Conclusion: We provide preliminary evidence suggesting that our model can be used with small EEG databases while improving upon the state-of-the-art performance across several tasks and across subjects. Significance: The EEGNet neural network architecture provides state-of-the-art performance across several tasks and across subjects, challenging the notion that large datasets are required to obtain optimal performance." }
2003.02657
1611.08024
I. INTRODUCTION
Basically, most machine learning-based BCI methods follow these processes; however, these methods need specific modifications to classify a user's intention/condition for each different paradigm #REFR .
[ "Evoked BCIs exploit unintentional electrical potentials reacting to external or internal stimuli.", "Examples of evoked BCIs include steady-state visually evoked potentials (SSVEP) #OTHEREFR , #OTHEREFR and event-related potentials #OTHEREFR .", "Additionally, spontaneous BCIs use an internal cognitive process such as event related desynchronization and event related synchronization (ERD/ERS) in sensorimotor rhythms, e.g., motor imagery (MI) #OTHEREFR , #OTHEREFR induced by imagining movements in addition to physical movement.", "Well-known examples of passive BCIs include the use of sleep/drowsy EEG signals for sleep stage classification or identifying mental fatigue to alert a driver of a dangerous situation and seizure EEG patterns for onset detection to provide the patient with a warning of a potential seizure.", "Generally, machine learning-based BCIs consist of five main processing stages #OTHEREFR : (i) an EEG signal acquisition phase based on each paradigm, (ii) signal preprocessing (e.g., channel selection and band-pass filtering), (iii) feature representation learning, (iv) classifier learning, and finally (v) a feedback stage." ]
[ "In other words, machine learningbased methods need to have prior knowledge of different EEG paradigms #OTHEREFR , #OTHEREFR , #OTHEREFR , #OTHEREFR , #OTHEREFR .", "Therefore, conventional machine learning-based BCIs have discovered EEG representations through extremely specialized approaches, e.g., a common spatial pattern (CSP) #OTHEREFR or its variants #OTHEREFR , #OTHEREFR for MI signals and a canonical correlation analysis (CCA) #OTHEREFR for SSVEP signals decoding.", "While hand-crafted feature representation learning has a pivotal role in a conventional machine learning framework #OTHEREFR , #OTHEREFR , #OTHEREFR , deep learning-based representation has had remarkable results in the BCI community #OTHEREFR , #OTHEREFR , #OTHEREFR .", "These deep learning-based methods have integrated a feature extraction step with a classifier learning step such that those steps are jointly optimized, thereby improving performance.", "Among various deep learning methods, convolutional neural networks (CNNs) have the advantage #OTHEREFR , #OTHEREFR , #OTHEREFR of maintaining the structural and configurational information in the original data." ]
[ "machine learning-based BCI" ]
method
{ "title": "Multi-Scale Neural network for EEG Representation Learning in BCI", "abstract": "Recent advances in deep learning have had a methodological and practical impact on brain-computer interface (BCI) research. Among the various deep network architectures, convolutional neural networks (CNNs) have been well suited for spatio-spectral-temporal electroencephalogram (EEG) signal representation learning. Most of the existing CNN-based methods described in the literature extract features at a sequential level of abstraction with repetitive nonlinear operations and involve densely connected layers for classification. However, studies in neurophysiology have revealed that EEG signals carry information in different ranges of frequency components. To better reflect these multi-frequency properties in EEGs, we propose a novel deep multi-scale neural network that discovers feature representations in multiple frequency/time ranges and extracts relationships among electrodes, i.e., spatial representations, for subject intention/condition identification. Furthermore, by completely representing EEG signals with spatio-spectral-temporal information, the proposed method can be utilized for diverse paradigms in both active and passive BCIs, contrary to existing methods that are primarily focused on single-paradigm BCIs. To demonstrate the validity of our proposed method, we conducted experiments on various paradigms of active/passive BCI datasets. Our experimental results demonstrated that the proposed method achieved performance improvements when judged against comparable state-of-the-art methods. Additionally, we analyzed the proposed method using different techniques, such as PSD curves and relevance score inspection to validate the multi-scale EEG signal information capturing ability, activation pattern maps for investigating the learned spatial filters, and t-SNE plotting for visualizing represented features. Finally, we also demonstrated our method's application to real-world problems." }
{ "title": "EEGNet: A Compact Convolutional Network for EEG-based Brain-Computer Interfaces", "abstract": "Objective: Brain-Computer Interface technologies (BCI) enable the direct communication between humans and computers by analyzing brain measurements, such as electroencephalography (EEG). These technologies have been applied to a variety of domains, including neuroprosthetic control and the monitoring of epileptic seizures. Existing BCI systems primarily use a priori knowledge of EEG features of interest to build machine learning models. Recently, convolutional networks have been used for automatic feature extraction of large image databases, where they have obtained state-of-the-art results. In this work we introduce EEGNet, a compact fully convolutional network for EEG-based BCIs developed using Deep Learning approaches. Methods: EEGNet is a 4-layer convolutional network that uses filter factorization for learning a compact representation of EEG time series. EEGNet is one of the smallest convolutional networks to date, having less than 2200 parameters for a binary classification. Results: We show state-of-the-art classification performance across four different BCI paradigms: P300 event-related potential, error-related negativity, movement-related cortical potential, and sensory motor rhythm, with as few as 500 EEG trials. We also show that adding more trials reduces the error variance of prediction rather than improving classification performance. Conclusion: We provide preliminary evidence suggesting that our model can be used with small EEG databases while improving upon the state-of-the-art performance across several tasks and across subjects. Significance: The EEGNet neural network architecture provides state-of-the-art performance across several tasks and across subjects, challenging the notion that large datasets are required to obtain optimal performance." }
2003.02657
1611.08024
A. Linear Models
However, these methods require certain prior neurophysiological knowledge #REFR , because their feature extraction stages are specifically designed for each EEG paradigm.
[ "Specifically, these authors #OTHEREFR used filter banks in a channel-wise manner to capture the spatio-spectral information.", "Then, by encoding the temporal evolution of extracted spatio-spectral feature vectors, they #OTHEREFR effectively constructed epileptic seizure EEG signal spatio-spectral-temporal features and classified the seizure and non-seizure features utilizing a support vector machine (SVM).", "Recently, spectral features derived from a principal component analysis (PCA) #OTHEREFR exhibited superior performance for seizure onset detection. In particular, Lee et al. #OTHEREFR band-pass filtered raw signals and calculated PSD.", "Then they #OTHEREFR applied PCA for the extraction of EEG signal spectral features.", "These practical linear model-based BCI methods #OTHEREFR , #OTHEREFR , #OTHEREFR , #OTHEREFR , #OTHEREFR , #OTHEREFR , #OTHEREFR have demonstrated credible performance." ]
[ "Conversely, our method does not need to be specialized for different paradigms." ]
[ "EEG paradigm" ]
method
{ "title": "Multi-Scale Neural network for EEG Representation Learning in BCI", "abstract": "Recent advances in deep learning have had a methodological and practical impact on brain-computer interface (BCI) research. Among the various deep network architectures, convolutional neural networks (CNNs) have been well suited for spatio-spectral-temporal electroencephalogram (EEG) signal representation learning. Most of the existing CNN-based methods described in the literature extract features at a sequential level of abstraction with repetitive nonlinear operations and involve densely connected layers for classification. However, studies in neurophysiology have revealed that EEG signals carry information in different ranges of frequency components. To better reflect these multi-frequency properties in EEGs, we propose a novel deep multi-scale neural network that discovers feature representations in multiple frequency/time ranges and extracts relationships among electrodes, i.e., spatial representations, for subject intention/condition identification. Furthermore, by completely representing EEG signals with spatio-spectral-temporal information, the proposed method can be utilized for diverse paradigms in both active and passive BCIs, contrary to existing methods that are primarily focused on single-paradigm BCIs. To demonstrate the validity of our proposed method, we conducted experiments on various paradigms of active/passive BCI datasets. Our experimental results demonstrated that the proposed method achieved performance improvements when judged against comparable state-of-the-art methods. Additionally, we analyzed the proposed method using different techniques, such as PSD curves and relevance score inspection to validate the multi-scale EEG signal information capturing ability, activation pattern maps for investigating the learned spatial filters, and t-SNE plotting for visualizing represented features. Finally, we also demonstrated our method's application to real-world problems." }
{ "title": "EEGNet: A Compact Convolutional Network for EEG-based Brain-Computer Interfaces", "abstract": "Objective: Brain-Computer Interface technologies (BCI) enable the direct communication between humans and computers by analyzing brain measurements, such as electroencephalography (EEG). These technologies have been applied to a variety of domains, including neuroprosthetic control and the monitoring of epileptic seizures. Existing BCI systems primarily use a priori knowledge of EEG features of interest to build machine learning models. Recently, convolutional networks have been used for automatic feature extraction of large image databases, where they have obtained state-of-the-art results. In this work we introduce EEGNet, a compact fully convolutional network for EEG-based BCIs developed using Deep Learning approaches. Methods: EEGNet is a 4-layer convolutional network that uses filter factorization for learning a compact representation of EEG time series. EEGNet is one of the smallest convolutional networks to date, having less than 2200 parameters for a binary classification. Results: We show state-of-the-art classification performance across four different BCI paradigms: P300 event-related potential, error-related negativity, movement-related cortical potential, and sensory motor rhythm, with as few as 500 EEG trials. We also show that adding more trials reduces the error variance of prediction rather than improving classification performance. Conclusion: We provide preliminary evidence suggesting that our model can be used with small EEG databases while improving upon the state-of-the-art performance across several tasks and across subjects. Significance: The EEGNet neural network architecture provides state-of-the-art performance across several tasks and across subjects, challenging the notion that large datasets are required to obtain optimal performance." }
1707.08262
1611.08024
Related work
Compact CNNs have been proposed to learn representations of EEG signals for brain-computer interface tasks #REFR .
[ "RNNs have had great success in speech recognition, handwriting recognition, and machine translation #OTHEREFR .", "In health care applications, RNNs have also demonstrated success on predictive modeling problems using electronic health records (Choi et al., 2016a,b; #OTHEREFR .", "Following the recent development of deep neural networks, methods have been proposed to learn feature representation from EEG data.", "Recently, a method was proposed to learn an EEG representation by converting the signal into an image using the location of electrodes and applying deep a CNN to the image #OTHEREFR .", "Convolutional neural networks have also been applied to hand-chosen features for epileptic seizure recognition #OTHEREFR ." ]
[ "These successful applications to EEG data suggest that deep learning methods have potential for analyzing EEG data from PSGs to extract efficient representations for automatic sleep-wake stage annotation." ]
[ "EEG", "brain computer interface" ]
method
{ "title": "SLEEPNET: Automated Sleep Staging System via Deep Learning", "abstract": "Sleep disorders, such as sleep apnea, parasomnias, and hypersomnia, affect 50-70 million adults in the United States (Hillman et al., 2006) . Overnight polysomnography (PSG), including brain monitoring using electroencephalography (EEG), is a central component of the diagnostic evaluation for sleep disorders. While PSG is conventionally performed by trained technologists, the recent rise of powerful neural network learning algorithms combined with large physiological datasets offers the possibility of automation, potentially making expert-level sleep analysis more widely available. We propose SLEEPNET (Sleep EEG neural network), a deployed annotation tool for sleep staging. SLEEPNET uses a deep recurrent neural network trained on the largest sleep physiology database assembled to date, consisting of PSGs from over 10,000 patients from the Massachusetts General Hospital (MGH) Sleep Laboratory. SLEEPNET achieves human-level annotation performance on an independent test set of 1,000 EEGs, with an average accuracy of 85.76% and algorithm-expert inter-rater agreement (IRA) of κ = 79.46%, comparable to expert-expert IRA." }
{ "title": "EEGNet: A Compact Convolutional Network for EEG-based Brain-Computer Interfaces", "abstract": "Objective: Brain-Computer Interface technologies (BCI) enable the direct communication between humans and computers by analyzing brain measurements, such as electroencephalography (EEG). These technologies have been applied to a variety of domains, including neuroprosthetic control and the monitoring of epileptic seizures. Existing BCI systems primarily use a priori knowledge of EEG features of interest to build machine learning models. Recently, convolutional networks have been used for automatic feature extraction of large image databases, where they have obtained state-of-the-art results. In this work we introduce EEGNet, a compact fully convolutional network for EEG-based BCIs developed using Deep Learning approaches. Methods: EEGNet is a 4-layer convolutional network that uses filter factorization for learning a compact representation of EEG time series. EEGNet is one of the smallest convolutional networks to date, having less than 2200 parameters for a binary classification. Results: We show state-of-the-art classification performance across four different BCI paradigms: P300 event-related potential, error-related negativity, movement-related cortical potential, and sensory motor rhythm, with as few as 500 EEG trials. We also show that adding more trials reduces the error variance of prediction rather than improving classification performance. Conclusion: We provide preliminary evidence suggesting that our model can be used with small EEG databases while improving upon the state-of-the-art performance across several tasks and across subjects. Significance: The EEGNet neural network architecture provides state-of-the-art performance across several tasks and across subjects, challenging the notion that large datasets are required to obtain optimal performance." }
1909.06970
1611.08024
Within-subject classification
In contrast, in #REFR , the authors reported that these methods did not perform significantly differently from one another.
[ "Table 3 presents the mean AUC values of the within-subject five-fold cross-validation results across all methods, which can be seen in Figure 7 .", "The ANOVA analysis showed that the means AUCs of CNN3 and UCNN3 are significantly different from the rest of the methods.", "The means AUCs of CNN-1 are very similar to those of CNN-R.", "However, in #OTHEREFR , the authors claimed that CNN-R performed significantly better that CNN-1.", "On the other hand, the mean AUC values of Shallow ConvNet are significantly different from the means of Deep ConvNet but not from other methods." ]
[ "More importantly, the means of FCNN and SepConv1D-1F are not significantly different from other methods. Moreover, they are very similar to them." ]
[ "contrast" ]
result
{ "title": "A few filters are enough: Convolutional Neural Network for P300 Detection", "abstract": "In this paper, we aim to provide elements to contribute to the discussion about the usefulness of deep CNNs with several filters to solve both within-subject and cross-subject classification for single-trial P300 detection. To that end, we present SepConv1D, a simple Convolutional Neural Network architecture consisting of a depthwise separable This is important because simpler, cheaper, faster and, thus, more portable devices can be built." }
{ "title": "EEGNet: A Compact Convolutional Network for EEG-based Brain-Computer Interfaces", "abstract": "Objective: Brain-Computer Interface technologies (BCI) enable the direct communication between humans and computers by analyzing brain measurements, such as electroencephalography (EEG). These technologies have been applied to a variety of domains, including neuroprosthetic control and the monitoring of epileptic seizures. Existing BCI systems primarily use a priori knowledge of EEG features of interest to build machine learning models. Recently, convolutional networks have been used for automatic feature extraction of large image databases, where they have obtained state-of-the-art results. In this work we introduce EEGNet, a compact fully convolutional network for EEG-based BCIs developed using Deep Learning approaches. Methods: EEGNet is a 4-layer convolutional network that uses filter factorization for learning a compact representation of EEG time series. EEGNet is one of the smallest convolutional networks to date, having less than 2200 parameters for a binary classification. Results: We show state-of-the-art classification performance across four different BCI paradigms: P300 event-related potential, error-related negativity, movement-related cortical potential, and sensory motor rhythm, with as few as 500 EEG trials. We also show that adding more trials reduces the error variance of prediction rather than improving classification performance. Conclusion: We provide preliminary evidence suggesting that our model can be used with small EEG databases while improving upon the state-of-the-art performance across several tasks and across subjects. Significance: The EEGNet neural network architecture provides state-of-the-art performance across several tasks and across subjects, challenging the notion that large datasets are required to obtain optimal performance." }
1906.00540
1302.0698
The fractional Laplace operator. Since
Before describing the aforementioned solution technique, we briefly recall the finite element approximation from #REFR of the state equation (2.4).
[ "3.3.", "A fully discrete scheme for the fractional optimal control problem.", "In what follows we briefly recall the fully discrete scheme proposed in #OTHEREFR and review its a priori error analysis.", "To accomplish this task, we will assume in this section that", "This regularity assumption holds if, for instance, Ω is convex #OTHEREFR ." ]
[ "Let T Ω = {K} be a conforming and shape regular mesh of Ω into cells K that are isoparametrically equivalent either to the unit cube [0, #OTHEREFR n or the unit simplex in R n #OTHEREFR .", "Let I Y be a partition of [0, Y ] with mesh points", "We construct a mesh T Y over the cylinder C Y as T Y = T Ω ⊗ I Y , the tensor product triangulation of T Ω and I Y .", "The set of all the obtained meshes is denoted by T.", "Notice that, owing to (3.14), the meshes T Y are not shape regular but satisfy: if" ]
[ "finite element approximation" ]
method
{ "title": "An adaptive finite element method for the sparse optimal control of fractional diffusion", "abstract": "Abstract. We propose and analyze an a posteriori error estimator for a PDE-constrained optimization problem involving a nondifferentiable cost functional, fractional diffusion, and controlconstraints. We realize fractional diffusion as the Dirichlet-to-Neumann map for a nonuniformly PDE and propose an equivalent optimal control problem with a local state equation. For such an equivalent problem, we design an a posteriori error estimator which can be defined as the sum of four contributions: two contributions related to the approximation of the state and adjoint equations and two contributions that account for the discretization of the control variable and its associated subgradient. The contributions related to the discretization of the state and adjoint equations rely on anisotropic error estimators in weighted Sobolev spaces. We prove that the proposed a posteriori error estimator is locally efficient and, under suitable assumptions, reliable. We design an adaptive scheme that yields, for the examples that we perform, optimal experimental rates of convergence." }
{ "title": "A PDE approach to fractional diffusion in general domains: a priori error analysis", "abstract": "Abstract. The purpose of this work is the study of solution techniques for problems involving fractional powers of symmetric coercive elliptic operators in a bounded domain with Dirichlet boundary conditions. These operators can be realized as the Dirichlet to Neumann map for a degenerate/singular elliptic problem posed on a semi-infinite cylinder, which we analyze in the framework of weighted Sobolev spaces. Motivated by the rapid decay of the solution of this problem, we propose a truncation that is suitable for numerical approximation. We discretize this truncation using first degree tensor product finite elements. We derive a priori error estimates in weighted Sobolev spaces. The estimates exhibit optimal regularity but suboptimal order for quasi-uniform meshes. For anisotropic meshes, instead, they are quasi-optimal in both order and regularity. We present numerical experiments to illustrate the method's performance." }
1404.0068
1302.0698
Finite element methods.
The graded meshes described by (4.11) yield near-optimal error estimates, in both order and regularity, for the elliptic case investigated in #REFR .
[ "It is known that the numerical approximation of functions with a strong directional-dependent behavior needs anisotropic elements in order to recover quasi-optimal error estimates #OTHEREFR .", "In our setting, anisotropic elements of tensor product structure are essential.", "Given T Y , we call N (T Y ) the set of its nodes and", "If K is a simplex, then P(T ) = P 1 (K), whereas if K is a n-rectangle, then P(T ) = Q 1 (K).", "We also define U(T Ω ) := tr Ω V(T Y ), i.e., a P 1 finite element space over the mesh T Ω ." ]
[]
[ "graded meshes", "elliptic case" ]
background
{ "title": "A PDE approach to space-time fractional parabolic problems", "abstract": "Abstract. We study solution techniques for parabolic equations with fractional diffusion and Caputo fractional time derivative, the latter being discretized and analyzed in a general Hilbert space setting. The spatial fractional diffusion is realized as the Dirichlet-to-Neumann map for a nonuniformly elliptic problem posed on a semi-infinite cylinder in one more spatial dimension. We write our evolution problem as a quasi-stationary elliptic problem with a dynamic boundary condition. We propose and analyze an implicit fully-discrete scheme: first-degree tensor product finite elements in space and an implicit finite difference discretization in time. We prove stability and error estimates for this scheme." }
{ "title": "A PDE approach to fractional diffusion in general domains: a priori error analysis", "abstract": "Abstract. The purpose of this work is the study of solution techniques for problems involving fractional powers of symmetric coercive elliptic operators in a bounded domain with Dirichlet boundary conditions. These operators can be realized as the Dirichlet to Neumann map for a degenerate/singular elliptic problem posed on a semi-infinite cylinder, which we analyze in the framework of weighted Sobolev spaces. Motivated by the rapid decay of the solution of this problem, we propose a truncation that is suitable for numerical approximation. We discretize this truncation using first degree tensor product finite elements. We derive a priori error estimates in weighted Sobolev spaces. The estimates exhibit optimal regularity but suboptimal order for quasi-uniform meshes. For anisotropic meshes, instead, they are quasi-optimal in both order and regularity. We present numerical experiments to illustrate the method's performance." }
1508.04382
1302.0698
2.2.
We now review the main results of #REFR concerning the a priori error analysis of discretizations of problem (1.1).
[ "A priori error analysis." ]
[ "This will also serve to make clear the limitations of this theory, thereby justifying the quest for an a posteriori error analysis. In this section we assume that", "(Ω).", "This holds if, for instance, the domain Ω is convex #OTHEREFR .", "Since C is unbounded, problem (", "then the aforementioned problem reads:" ]
[ "discretizations" ]
background
{ "title": "A PDE Approach to Numerical Fractional Diffusion", "abstract": "Abstract. Fractional diffusion has become a fundamental tool for the modeling of multiscale and heterogeneous phenomena. However, due to its nonlocal nature, its accurate numerical approximation is delicate. We survey our research program on the design and analysis of efficient solution techniques for problems involving fractional powers of elliptic operators. Starting from a localization PDE result for these operators, we develop local techniques for their solution: a priori and a posteriori error analyses, adaptivity and multilevel methods. We show the flexibility of our approach by proposing and analyzing local solution techniques for a space-time fractional parabolic equation." }
{ "title": "A PDE approach to fractional diffusion in general domains: a priori error analysis", "abstract": "Abstract. The purpose of this work is the study of solution techniques for problems involving fractional powers of symmetric coercive elliptic operators in a bounded domain with Dirichlet boundary conditions. These operators can be realized as the Dirichlet to Neumann map for a degenerate/singular elliptic problem posed on a semi-infinite cylinder, which we analyze in the framework of weighted Sobolev spaces. Motivated by the rapid decay of the solution of this problem, we propose a truncation that is suitable for numerical approximation. We discretize this truncation using first degree tensor product finite elements. We derive a priori error estimates in weighted Sobolev spaces. The estimates exhibit optimal regularity but suboptimal order for quasi-uniform meshes. For anisotropic meshes, instead, they are quasi-optimal in both order and regularity. We present numerical experiments to illustrate the method's performance." }
1607.07704
1302.0698
Introduction
Such an approach is a more suitable choice for numerical methods; see #REFR for the linear case.
[ "For the latter result, we apply a well-known technique due to Stampacchia.", "However, when Ω has a Lipschitz continuous boundary and f is locally Lipschitz continuous we illustrate the regularity shift.", "For completeness we also derive the Hölder regularity of solution for smooth Ω.", "Numerical realization of nonlocal operators poses various challenges for instance, direct discretization of (1.1), by using finite elements, requires access to eigenvalues and eigenvectors of (−∆ D ) which is an intractable problem in general domains.", "Instead we use the so-called Caffarelli-Silvestre extension to realize the fractional power (−∆ D ) s ." ]
[ "The extension idea was introduced by Caffarelli and Silvestre in R N #OTHEREFR and its extensions to bounded domains is given in e.g. #OTHEREFR .", "The extension says that fractional powers (−∆ D ) s of the spatial operator −∆ D can be realized as an operator that maps a Dirichlet boundary condition to a Neumann condition via an extension problem on the semi-infinite cylinder C = Ω × (0, ∞), that is, a Dirichlet-to-Neumann operator. See Section 3 for more details.", "We derive a priori finite element error estimates for our numerical scheme.", "Our proof requires the solution to a discrete linearized problem to be uniformly bounded in L ∞ (Ω), which can be readily derived by using the inverse estimates and under the assumption s > (N − 2)/2.", "As a result, when N ≥ 3, we only have error estimates in case s > (N − 2)/2." ]
[ "numerical methods" ]
method
{ "title": "A note on semilinear fractional elliptic equation: analysis and discretization", "abstract": "Abstract In this paper we study existence, regularity, and approximation of solution to a fractional semilinear elliptic equation of order s ∈ (0, 1). We identify minimal conditions on the nonlinear term and the source which leads to existence of weak solutions and uniform L ∞ -bound on the solutions. Next we realize the fractional Laplacian as a Dirichlet-to-Neumann map via the Caffarelli-Silvestre extension. We introduce a firstdegree tensor product finite elements space to approximate the truncated problem. We derive a priori error estimates and conclude with an illustrative numerical example." }
{ "title": "A PDE approach to fractional diffusion in general domains: a priori error analysis", "abstract": "Abstract. The purpose of this work is the study of solution techniques for problems involving fractional powers of symmetric coercive elliptic operators in a bounded domain with Dirichlet boundary conditions. These operators can be realized as the Dirichlet to Neumann map for a degenerate/singular elliptic problem posed on a semi-infinite cylinder, which we analyze in the framework of weighted Sobolev spaces. Motivated by the rapid decay of the solution of this problem, we propose a truncation that is suitable for numerical approximation. We discretize this truncation using first degree tensor product finite elements. We derive a priori error estimates in weighted Sobolev spaces. The estimates exhibit optimal regularity but suboptimal order for quasi-uniform meshes. For anisotropic meshes, instead, they are quasi-optimal in both order and regularity. We present numerical experiments to illustrate the method's performance." }
1403.4278
1302.0698
Introduction
The following simple strategy to find the solution of (1.1) has been proposed and analyzed in #REFR : given a sufficiently smooth function f, we solve (1.2), thus obtaining a function U = U(x′, y).
[ "where ∂ L C = ∂Ω × [0, ∞) denotes the lateral boundary of C, and", "is the the so-called conormal exterior derivative of U with ν being the unit outer normal to C at Ω × {0}.", "The parameter α is defined as (1.4) α = 1 − 2s ∈ (−1, 1).", "Finally, d s is a positive normalization constant which depends only on s; see #OTHEREFR for details.", "We will call y the extended variable and the dimension n + 1 in R n+1 + the extended dimension of problem (1.2)." ]
[ "Setting u : x ∈ Ω → u(x ) = U (x , 0) ∈ R, we obtain the solution of (1.1).", "For an overview of the existing numerical techniques used to solve problems involving fractional diffusion such as the matrix transference technique and the contour integral method, we refer to #OTHEREFR .", "In addition to #OTHEREFR , two other works that deal with the discretization of fractional powers of elliptic operators have subsequently appeared: the approach given by Bonito and Pasciak in #OTHEREFR is based on the integral formulation for self-adjoint operators discussed, for instance, in #OTHEREFR Chapter 10.4] ; the work by del Teso and Vázquez #OTHEREFR studies the approximation of the α-harmonic extension problem via the finite difference method.", "The main advantage of the algorithm proposed in #OTHEREFR , is that we are solving the local problem (1.2) instead of dealing with the nonlocal operator (−∆) s of problem (1.1).", "However, this comes at the expense of incorporating one more dimension to the problem, thus raising the question of how computationally efficient this approach is." ]
[ "sufficiently smooth function" ]
method
{ "title": "Multilevel methods for nonuniformly elliptic operators", "abstract": "Abstract. We develop and analyze multilevel methods for nonuniformly elliptic operators whose ellipticity holds in a weighted Sobolev space with an A 2 -Muckenhoupt weight. Using the so-called Xu-Zikatanov (XZ) identity, we derive a nearly uniform convergence result, under the assumption that the underlying mesh is quasi-uniform. We also consider the so-called α-harmonic extension to localize fractional powers of elliptic operators. Motivated by the scheme proposed in [R.H. Nochetto, E. Otárola, and A.J. Salgado. A PDE approach to fractional diffusion in general domains: a priori error analysis. arXiv:1302.0698, 2013] we present a multilevel method with line smoothers and obtain a nearly uniform convergence result on anisotropic meshes. Numerical experiments reveal a competitive performance of our method." }
{ "title": "A PDE approach to fractional diffusion in general domains: a priori error analysis", "abstract": "Abstract. The purpose of this work is the study of solution techniques for problems involving fractional powers of symmetric coercive elliptic operators in a bounded domain with Dirichlet boundary conditions. These operators can be realized as the Dirichlet to Neumann map for a degenerate/singular elliptic problem posed on a semi-infinite cylinder, which we analyze in the framework of weighted Sobolev spaces. Motivated by the rapid decay of the solution of this problem, we propose a truncation that is suitable for numerical approximation. We discretize this truncation using first degree tensor product finite elements. We derive a priori error estimates in weighted Sobolev spaces. The estimates exhibit optimal regularity but suboptimal order for quasi-uniform meshes. For anisotropic meshes, instead, they are quasi-optimal in both order and regularity. We present numerical experiments to illustrate the method's performance." }
1403.4278
1302.0698
A multigrid method for the fractional Laplace operator on anisotropic meshes
This allows us to recover an almost-optimal error estimate for the finite element approximation of problem (1.2) [#REFR, Theorem 5.4].
[ "As we explained in § 3.3, the regularity estimate (3.14) implies the necessity of graded meshes in the extended variable y." ]
[ "In fact, finite elements on quasi-uniform meshes have poor approximation properties for small values of the parameter s.", "The isotropic error estimates of [46, Theorem 5.1] are not optimal, which makes anisotropic estimates essential.", "For this reason, in this section we develop a multilevel theory for problem (1.2) having in mind anisotropic partitions in the extended variable y and the multilevel setting described in Section 4 for the nonuniformly elliptic equation (3.1).", "We shall obtain nearly uniform convergence of a V-cycle multilevel method for the problem (1.2) without any regularity assumptions. We consider line Gauss-Seidel smoothers.", "The analysis is an adaptation of the results presented in #OTHEREFR for anisotropic elliptic equations, and it is again based on the XZ identity #OTHEREFR ." ]
[ "finite element approximation" ]
background
{ "title": "Multilevel methods for nonuniformly elliptic operators", "abstract": "Abstract. We develop and analyze multilevel methods for nonuniformly elliptic operators whose ellipticity holds in a weighted Sobolev space with an A 2 -Muckenhoupt weight. Using the so-called Xu-Zikatanov (XZ) identity, we derive a nearly uniform convergence result, under the assumption that the underlying mesh is quasi-uniform. We also consider the so-called α-harmonic extension to localize fractional powers of elliptic operators. Motivated by the scheme proposed in [R.H. Nochetto, E. Otárola, and A.J. Salgado. A PDE approach to fractional diffusion in general domains: a priori error analysis. arXiv:1302.0698, 2013] we present a multilevel method with line smoothers and obtain a nearly uniform convergence result on anisotropic meshes. Numerical experiments reveal a competitive performance of our method." }
{ "title": "A PDE approach to fractional diffusion in general domains: a priori error analysis", "abstract": "Abstract. The purpose of this work is the study of solution techniques for problems involving fractional powers of symmetric coercive elliptic operators in a bounded domain with Dirichlet boundary conditions. These operators can be realized as the Dirichlet to Neumann map for a degenerate/singular elliptic problem posed on a semi-infinite cylinder, which we analyze in the framework of weighted Sobolev spaces. Motivated by the rapid decay of the solution of this problem, we propose a truncation that is suitable for numerical approximation. We discretize this truncation using first degree tensor product finite elements. We derive a priori error estimates in weighted Sobolev spaces. The estimates exhibit optimal regularity but suboptimal order for quasi-uniform meshes. For anisotropic meshes, instead, they are quasi-optimal in both order and regularity. We present numerical experiments to illustrate the method's performance." }
1706.04066
1302.0698
Introduction
In Section 4.1, we essentially recover the results of #REFR . The reason for doing this is twofold.
[ "Numerical approaches for the integral definition of the fractional Laplacian, which is not equivalent to the spectral definition considered in the present paper, can be found in #OTHEREFR .", "This paper is organized as follows: In Section 2, we state the definition of the fractional Laplacian, formulate the extended problem in detail, and introduce the functional framework needed for the subsequent error analysis.", "Moreover, in this section, we are concerned with several properties of the solution of the extended problem such as a series representation and corresponding regularity results.", "The discrete, extended problem posed on the truncated cylinder is formulated at the beginning of Section 3.", "In the extended direction, we distinguish between graded meshes and h-FEM, and geometric meshes and hp-FEM, see Sections 3.1 and 3.2. The error analysis is given in Section 4." ]
[ "First, we are able to slightly improve the mesh grading condition used in #OTHEREFR .", "However, the main reason to analyze the h-FEM on graded meshes before developing the analysis for the hp-method considered in Section 4.2 is, that the techniques we use are almost identical for both cases, but the details are simpler for h-FEM.", "Implementation aspects and numerical experiments, which underline the efficiency of our approach, are presented in Section 5.", "In the appendix, we collect different results for special functions defined by the modified Bessel functions of second kind.", "These are especially needed in Section 2 for the discussion of the solution of the extended problem." ]
[ "Section" ]
background
{ "title": "$hp$-Finite Elements for Fractional Diffusion", "abstract": "The purpose of this work is to introduce and analyze a numerical scheme to efficiently solve boundary value problems involving the spectral fractional Laplacian. The approach is based on a reformulation of the problem posed on a semi-infinite cylinder in one more spatial dimension. After a suitable truncation of this cylinder, the resulting problem is discretized with linear finite elements in the original domain and with hp-finite elements in the extended direction. The proposed approach yields a drastic reduction of the computational complexity in terms of degrees of freedom and even has slightly improved convergence properties compared to a discretization using linear finite elements for both the original domain and the extended direction. The performance of the method is illustrated by numerical experiments." }
{ "title": "A PDE approach to fractional diffusion in general domains: a priori error analysis", "abstract": "Abstract. The purpose of this work is the study of solution techniques for problems involving fractional powers of symmetric coercive elliptic operators in a bounded domain with Dirichlet boundary conditions. These operators can be realized as the Dirichlet to Neumann map for a degenerate/singular elliptic problem posed on a semi-infinite cylinder, which we analyze in the framework of weighted Sobolev spaces. Motivated by the rapid decay of the solution of this problem, we propose a truncation that is suitable for numerical approximation. We discretize this truncation using first degree tensor product finite elements. We derive a priori error estimates in weighted Sobolev spaces. The estimates exhibit optimal regularity but suboptimal order for quasi-uniform meshes. For anisotropic meshes, instead, they are quasi-optimal in both order and regularity. We present numerical experiments to illustrate the method's performance." }
1402.1916
1302.0698
A weighted Poincaré inequality
For a star-shaped domain and a specific A_2-weight, we have proved a weighted Poincaré inequality [#REFR, Lemma 4.2].
[ "In order to obtain interpolation error estimates in L p (ω, Ω) and W 1 p (ω, Ω), it is instrumental to have a weighted Poincaré-like inequality #OTHEREFR .", "A pioneering reference is the work by Fabes, Kenig and Serapioni #OTHEREFR , which shows that, when the domain is a ball and the weight belongs to A p 1 < p < ∞, a weighted Poincaré inequality holds [34, Theorem 1.3 and Theorem 1.5]. For generalizations of this result see #OTHEREFR 45] ." ]
[ "In this section we extend this result to a general exponent p and a general weight ω ∈ A p (R n ).", "Our proof is constructive and not based on a compactness argument.", "This allows us to trace the dependence of the stability constant on the domain geometry.", "Lemma 3.1 (weighted Poincaré inequality I).", "Let S ⊂ R n be bounded, star-shaped with respect to a ballB, with diam S ≈ 1. Let χ be a continuous function on S with" ]
[ "weighted Poincaré inequality" ]
background
{ "title": "Piecewise polynomial interpolation in Muckenhoupt weighted Sobolev spaces and applications", "abstract": "Abstract. We develop a constructive piecewise polynomial approximation theory in weighted Sobolev spaces with Muckenhoupt weights for any polynomial degree. The main ingredients to derive optimal error estimates for an averaged Taylor polynomial are a suitable weighted Poincaré inequality, a cancellation property and a simple induction argument. We also construct a quasi-interpolation operator, built on local averages over stars, which is well defined for functions in L 1 . We derive optimal error estimates for any polynomial degree on simplicial shape regular meshes. On rectangular meshes, these estimates are valid under the condition that neighboring elements have comparable size, which yields optimal anisotropic error estimates over n-rectangular domains. The interpolation theory extends to cases when the error and function regularity require different weights. We conclude with three applications: nonuniform elliptic boundary value problems, elliptic problems with singular sources, and fractional powers of elliptic operators." }
{ "title": "A PDE approach to fractional diffusion in general domains: a priori error analysis", "abstract": "Abstract. The purpose of this work is the study of solution techniques for problems involving fractional powers of symmetric coercive elliptic operators in a bounded domain with Dirichlet boundary conditions. These operators can be realized as the Dirichlet to Neumann map for a degenerate/singular elliptic problem posed on a semi-infinite cylinder, which we analyze in the framework of weighted Sobolev spaces. Motivated by the rapid decay of the solution of this problem, we propose a truncation that is suitable for numerical approximation. We discretize this truncation using first degree tensor product finite elements. We derive a priori error estimates in weighted Sobolev spaces. The estimates exhibit optimal regularity but suboptimal order for quasi-uniform meshes. For anisotropic meshes, instead, they are quasi-optimal in both order and regularity. We present numerical experiments to illustrate the method's performance." }
1307.7079
1302.0698
Introduction
At this point we would like to mention the recent results in #REFR in relation to the rate of convergence of nonlinear approximation methods observed by Dahlke and DeVore in the harmonic case.
[ "Regarding Besov regularity of harmonic functions see also #OTHEREFR .", "The paper is organized in three sections.", "In the first one we prove mean value formulas for solutions of L a u = 0 at the points on the hyperplane y = 0 of R n+1 .", "The second section is devoted to apply the result in Section 1 in order to obtain a nonlocal mean value formula for solutions of (−△) s f = 0 on domains of R n .", "Finally, in Section 3, we use the above results to obtain a Besov regularity improvement for solutions of (−△) s f = 0 in Lipschitz domains of R n ." ]
[ "The main result of this section is contained in the next statement.", "As in #OTHEREFR we shall use X to denote the points (x, y) in R n+1 with x ∈ R n and y ∈ R.", "For x ∈ D with δ(x) we shall denote the distance from x to ∂D.", ") and has compact support in the ball S((0, 0), 1). It is easy to check that ∇ψ(X) = ϕ(X)X.", "Take now x ∈ D and 0 < r < δ(x)." ]
[ "harmonic case" ]
result
{ "title": "Mean value formulas for solutions of some degenerate elliptic equations and applications", "abstract": "Abstract. We prove a mean value formula for weak solutions of div(|y| a grad u) = 0 in R n+1 = {(x, y) : x ∈ R n , y ∈ R}, −1 < a < 1 and balls centered at points of the form (x, 0). We obtain an explicit nonlocal kernel for the mean value formula for solutions of (−△) s f = 0 on a domain D of R n . When D is Lipschitz we prove a Besov type regularity improvement for the solutions of (−△) s f = 0." }
{ "title": "A PDE approach to fractional diffusion in general domains: a priori error analysis", "abstract": "Abstract. The purpose of this work is the study of solution techniques for problems involving fractional powers of symmetric coercive elliptic operators in a bounded domain with Dirichlet boundary conditions. These operators can be realized as the Dirichlet to Neumann map for a degenerate/singular elliptic problem posed on a semi-infinite cylinder, which we analyze in the framework of weighted Sobolev spaces. Motivated by the rapid decay of the solution of this problem, we propose a truncation that is suitable for numerical approximation. We discretize this truncation using first degree tensor product finite elements. We derive a priori error estimates in weighted Sobolev spaces. The estimates exhibit optimal regularity but suboptimal order for quasi-uniform meshes. For anisotropic meshes, instead, they are quasi-optimal in both order and regularity. We present numerical experiments to illustrate the method's performance." }
1507.01985
1302.0698
Notation and preliminaries. In this work Ω is a convex bounded and open subset of
For a bounded domain there are several ways, not necessarily equivalent, to define the fractional Laplacian; see #REFR for a discussion.
[ "We also define the piecewise linear interpolantŴ", "The first order backward difference operator d is defined by dW", "k+1 for all t ∈ (t k , t k+1 ) and k = 0, . . . , K−1. Finally, we also notice that, for any sequence", "The relation a b indicates that a ≤ Cb for a constant that does not depend on either a or b, but it might depend on the problem data. The value of C might change at each occurrence.", "2.1. The fractional Laplacian." ]
[ "As in #OTHEREFR we will adopt that based on spectral theory #OTHEREFR . Namely, since −∆ :", "is an unbounded, positive and closed operator with dense domain", "(Ω) and its inverse is compact, there is a countable collection of eigenpairs {λ l , ϕ l } l∈N ⊂ R + × H 1 0 (Ω) such that {ϕ l } l∈N is an orthonormal basis of L 2 (Ω) and an orthogonal basis of", "then, for any s ∈ (0, 1), we define (−∆) s w = l∈N λ s l w l ϕ l , As it is well known, the theory of Hilbert scales presented in #OTHEREFR Chapter 1]", "e., the real interpolation between L 2 (Ω) and H 1 0 (Ω)." ]
[ "fractional Laplacian" ]
background
{ "title": "Finite element approximation of the parabolic fractional obstacle problem", "abstract": "Abstract. We study a discretization technique for the parabolic fractional obstacle problem in bounded domains. The fractional Laplacian is realized as the Dirichlet-to-Neumann map for a nonuniformly elliptic equation posed on a semi-infinite cylinder, which recasts our problem as a quasi-stationary elliptic variational inequality with a dynamic boundary condition. The rapid decay of the solution suggests a truncation that is suitable for numerical approximation. We discretize the truncation with a backward Euler scheme in time and, for space, we use first-degree tensor product finite elements. We present an error analysis based on different smoothness assumptions." }
{ "title": "A PDE approach to fractional diffusion in general domains: a priori error analysis", "abstract": "Abstract. The purpose of this work is the study of solution techniques for problems involving fractional powers of symmetric coercive elliptic operators in a bounded domain with Dirichlet boundary conditions. These operators can be realized as the Dirichlet to Neumann map for a degenerate/singular elliptic problem posed on a semi-infinite cylinder, which we analyze in the framework of weighted Sobolev spaces. Motivated by the rapid decay of the solution of this problem, we propose a truncation that is suitable for numerical approximation. We discretize this truncation using first degree tensor product finite elements. We derive a priori error estimates in weighted Sobolev spaces. The estimates exhibit optimal regularity but suboptimal order for quasi-uniform meshes. For anisotropic meshes, instead, they are quasi-optimal in both order and regularity. We present numerical experiments to illustrate the method's performance." }
1707.07367
1302.0698
In #REFR the extension problem (1.2) was first used as a way to obtain a numerical technique to approximate the solution to (1.1).
[ "The so-called conormal exterior derivative of U at Ω × {0} is", "We shall refer to y as the extended variable and to the dimension d + 1 in R d+1 + the extended dimension of problem (1.2) .", "Throughout the text, points x ∈ C will be written as x = (x ′ , y) with x ′ ∈ Ω and y > 0.", "The limit in (1.3) must be understood in the distributional sense #OTHEREFR .", "With the extension U at hand, the fractional powers of L in (1.1) and the Dirichlet-to-Neumann operator of problem (1.2) are related by" ]
[ "A piecewise linear finite element method (P 1 -FEM) was proposed and analyzed.", "In this work, we extend the results of #OTHEREFR in several directions: a) In Theorem 5.9, we generalize the error analysis of #OTHEREFR , based on the localization of L s given by (1.2), to nonconvex polygonal domains Ω ⊂ R 2 , under the requirement of Lipschitz regularity in Ω for A and c, and for f ∈ H 1−s (Ω) in (2.2) ahead.", "b) In Theorem 4.7 we prove, again under Lipschitz regularity in Ω for A and c, weighted H 2 (with respect to the extended variable y) regularity estimates for the solution U of (1.2).", "We use these to propose a novel, sparse tensor product P 1 -FEM in C which is realized by invoking (in parallel) O(log N Ω ) many instances of anisotropic tensor product P 1 -FEM in C.", "We prove, in Theorem 5.12 , that, when the base of the cylinder C is a polygonal domain Ω ⊂ R 2 , this approach yields a method with O(N Ω log N Ω ) degrees of freedom realizing the (optimal) asymptotic convergence rate of N −1/2 Ω ." ]
[ "numerical technique" ]
method
{ "title": "Tensor FEM for spectral fractional diffusion", "abstract": "Abstract. We design and analyze several Finite Element Methods (FEMs) applied to the Caffarelli-Silvestre extension that localizes the fractional powers of symmetric, coercive, linear elliptic operators in bounded domains with Dirichlet boundary conditions. We consider open, bounded, polytopal but not necessarily convex domains Ω ⊂ R d with d = 1, 2. For the solution to the extension problem, we establish analytic regularity with respect to the extended variable y ∈ (0, ∞). We prove that the solution belongs to countably normed, power-exponentially weighted Bochner spaces of analytic functions with respect to y, taking values in corner-weighted Kondat'ev type Sobolev spaces in Ω. In Ω ⊂ R 2 , we discretize with continuous, piecewise linear, Lagrangian FEM (P 1 -FEM) with mesh refinement near corners, and prove that first order convergence rate is attained for compatible data f ∈ H 1−s (Ω). We also prove that tensorization of a P 1 -FEM in Ω with a suitable hp-FEM in the extended variable achieves log-linear complexity with respect to N Ω , the number of degrees of freedom in the domain Ω. In addition, we propose a novel, sparse tensor product FEM based on a multilevel P 1 -FEM in Ω and on a P 1 -FEM on radical-geometric meshes in the extended variable. We prove that this approach also achieves log-linear complexity with respect to N Ω . Finally, under the stronger assumption that the data is analytic in Ω, and without compatibility at ∂Ω, we establish exponential rates of convergence of hp-FEM for spectral, fractional diffusion operators in energy norm. This is achieved by a combined tensor product hp-FEM for the Caffarelli-Silvestre extension in the truncated cylinder Ω × (0, Y ) with anisotropic geometric meshes that are refined towards ∂Ω. We also report numerical experiments for model problems which confirm the theoretical results. We indicate several extensions and generalizations of the proposed methods to other problem classes and to other boundary conditions on ∂Ω." }
{ "title": "A PDE approach to fractional diffusion in general domains: a priori error analysis", "abstract": "Abstract. The purpose of this work is the study of solution techniques for problems involving fractional powers of symmetric coercive elliptic operators in a bounded domain with Dirichlet boundary conditions. These operators can be realized as the Dirichlet to Neumann map for a degenerate/singular elliptic problem posed on a semi-infinite cylinder, which we analyze in the framework of weighted Sobolev spaces. Motivated by the rapid decay of the solution of this problem, we propose a truncation that is suitable for numerical approximation. We discretize this truncation using first degree tensor product finite elements. We derive a priori error estimates in weighted Sobolev spaces. The estimates exhibit optimal regularity but suboptimal order for quasi-uniform meshes. For anisotropic meshes, instead, they are quasi-optimal in both order and regularity. We present numerical experiments to illustrate the method's performance." }
1707.07367
1302.0698
Remark 3.2 (complexity).
The first one, as in #REFR , is a full tensor product FEM, for which we show a first-order rate of convergence in Ω, but at superlinear complexity in terms of the number N_Ω of degrees of freedom in Ω.
[ "5. h-FE discretization in Ω. We now begin with the discretization of (2.11).", "The structure of this section is as follows: in Section 5.1, we introduce the FE approximation in Ω and fix notation on Finite Element spaces.", "Section 5.2 introduces the FE discretization in C in abstract form.", "Section 5.3 next addresses a basic decomposition of the FE discretization error which decomposes the FE discretization error into two parts: a semidiscretization error with respect to x ′ ∈ Ω, and a corresponding error with respect to y ∈ (0, Y ), where 0 < Y < ∞ denotes a truncation parameter of the cylinder (0, ∞).", "Section 5.4 then addresses two first order tensor product FEMs in C." ]
[ "To reduce the complexity, we propose the second, novel approach: by sparse tensor product P 1 discretization of the extended problem in C, we show the same convergence rate, but with (essentially) linear complexity in terms of N Ω requiring only marginally more regularity of the data f in Ω.", "Section 5.5 addresses the use of an hp-FEM in the extended variable y, combined with a P 1 -FEM in Ω." ]
[ "convergence" ]
background
{ "title": "Tensor FEM for spectral fractional diffusion", "abstract": "Abstract. We design and analyze several Finite Element Methods (FEMs) applied to the Caffarelli-Silvestre extension that localizes the fractional powers of symmetric, coercive, linear elliptic operators in bounded domains with Dirichlet boundary conditions. We consider open, bounded, polytopal but not necessarily convex domains Ω ⊂ R d with d = 1, 2. For the solution to the extension problem, we establish analytic regularity with respect to the extended variable y ∈ (0, ∞). We prove that the solution belongs to countably normed, power-exponentially weighted Bochner spaces of analytic functions with respect to y, taking values in corner-weighted Kondat'ev type Sobolev spaces in Ω. In Ω ⊂ R 2 , we discretize with continuous, piecewise linear, Lagrangian FEM (P 1 -FEM) with mesh refinement near corners, and prove that first order convergence rate is attained for compatible data f ∈ H 1−s (Ω). We also prove that tensorization of a P 1 -FEM in Ω with a suitable hp-FEM in the extended variable achieves log-linear complexity with respect to N Ω , the number of degrees of freedom in the domain Ω. In addition, we propose a novel, sparse tensor product FEM based on a multilevel P 1 -FEM in Ω and on a P 1 -FEM on radical-geometric meshes in the extended variable. We prove that this approach also achieves log-linear complexity with respect to N Ω . Finally, under the stronger assumption that the data is analytic in Ω, and without compatibility at ∂Ω, we establish exponential rates of convergence of hp-FEM for spectral, fractional diffusion operators in energy norm. This is achieved by a combined tensor product hp-FEM for the Caffarelli-Silvestre extension in the truncated cylinder Ω × (0, Y ) with anisotropic geometric meshes that are refined towards ∂Ω. We also report numerical experiments for model problems which confirm the theoretical results. We indicate several extensions and generalizations of the proposed methods to other problem classes and to other boundary conditions on ∂Ω." }
{ "title": "A PDE approach to fractional diffusion in general domains: a priori error analysis", "abstract": "Abstract. The purpose of this work is the study of solution techniques for problems involving fractional powers of symmetric coercive elliptic operators in a bounded domain with Dirichlet boundary conditions. These operators can be realized as the Dirichlet to Neumann map for a degenerate/singular elliptic problem posed on a semi-infinite cylinder, which we analyze in the framework of weighted Sobolev spaces. Motivated by the rapid decay of the solution of this problem, we propose a truncation that is suitable for numerical approximation. We discretize this truncation using first degree tensor product finite elements. We derive a priori error estimates in weighted Sobolev spaces. The estimates exhibit optimal regularity but suboptimal order for quasi-uniform meshes. For anisotropic meshes, instead, they are quasi-optimal in both order and regularity. We present numerical experiments to illustrate the method's performance." }
1707.07367
1302.0698
5.4.3.
The proof of Theorem 5.9 follows arguments similar to those of #REFR and [33, Section 4.1] and uses the stability and approximation properties (5.14) of Π^ℓ_β. For completeness we provide the details.
[ "where N Ω = #T ℓ β .", "Before proving Theorem 5.9, we note a corollary that follows from a simple interpolation argument.", "Corollary 5.10 (reduced regularity).", "Assume that the meshes are constructed as in Theorem 5.9 and that f ∈ H −s+σ (Ω), with σ ∈ [0, 1].", "Then we have 32) where the hidden constant also depends on σ." ]
[ "Proof of Theorem 5.9: For the given choice of k, η and Y , we denote by π 1,ℓ η,{Y } the nodal interpolation operator on the mesh (5.15), which we analyzed in Lemma 5.7.", "By Lemmas 5.1 and 5.2, and by the choice (5.29) (recall (5.6)) it suffices to bound", "Recalling that ∇ = (∇ x ′ , ∂ y ) we split the first term I into", "In view of (5.29), we immediately obtain that the conditions (5.18) and (5.22) of Lemma 5.7 are satisfied.", "We can thus, since ηs > 1, bound the term I a using Lemma 5.7, item (iii), with j = 1 and X = L 2 (Ω) and the term I b using Lemma 5.7, item (ii) with X = H 1 0 (Ω). We have thus arrived at" ]
[ "approximation properties" ]
method
{ "title": "Tensor FEM for spectral fractional diffusion", "abstract": "Abstract. We design and analyze several Finite Element Methods (FEMs) applied to the Caffarelli-Silvestre extension that localizes the fractional powers of symmetric, coercive, linear elliptic operators in bounded domains with Dirichlet boundary conditions. We consider open, bounded, polytopal but not necessarily convex domains Ω ⊂ R d with d = 1, 2. For the solution to the extension problem, we establish analytic regularity with respect to the extended variable y ∈ (0, ∞). We prove that the solution belongs to countably normed, power-exponentially weighted Bochner spaces of analytic functions with respect to y, taking values in corner-weighted Kondat'ev type Sobolev spaces in Ω. In Ω ⊂ R 2 , we discretize with continuous, piecewise linear, Lagrangian FEM (P 1 -FEM) with mesh refinement near corners, and prove that first order convergence rate is attained for compatible data f ∈ H 1−s (Ω). We also prove that tensorization of a P 1 -FEM in Ω with a suitable hp-FEM in the extended variable achieves log-linear complexity with respect to N Ω , the number of degrees of freedom in the domain Ω. In addition, we propose a novel, sparse tensor product FEM based on a multilevel P 1 -FEM in Ω and on a P 1 -FEM on radical-geometric meshes in the extended variable. We prove that this approach also achieves log-linear complexity with respect to N Ω . Finally, under the stronger assumption that the data is analytic in Ω, and without compatibility at ∂Ω, we establish exponential rates of convergence of hp-FEM for spectral, fractional diffusion operators in energy norm. This is achieved by a combined tensor product hp-FEM for the Caffarelli-Silvestre extension in the truncated cylinder Ω × (0, Y ) with anisotropic geometric meshes that are refined towards ∂Ω. We also report numerical experiments for model problems which confirm the theoretical results. We indicate several extensions and generalizations of the proposed methods to other problem classes and to other boundary conditions on ∂Ω." }
{ "title": "A PDE approach to fractional diffusion in general domains: a priori error analysis", "abstract": "Abstract. The purpose of this work is the study of solution techniques for problems involving fractional powers of symmetric coercive elliptic operators in a bounded domain with Dirichlet boundary conditions. These operators can be realized as the Dirichlet to Neumann map for a degenerate/singular elliptic problem posed on a semi-infinite cylinder, which we analyze in the framework of weighted Sobolev spaces. Motivated by the rapid decay of the solution of this problem, we propose a truncation that is suitable for numerical approximation. We discretize this truncation using first degree tensor product finite elements. We derive a priori error estimates in weighted Sobolev spaces. The estimates exhibit optimal regularity but suboptimal order for quasi-uniform meshes. For anisotropic meshes, instead, they are quasi-optimal in both order and regularity. We present numerical experiments to illustrate the method's performance." }
1707.07367
1302.0698
8.1.
This result is analogous to the bounds obtained in #REFR for convex domains Ω, thus generalizing those results to nonconvex, polygonal domains Ω ⊂ R^2.
[ "Note that the change in the slope (from 1/2 to 1) near the boundary is a numerical artifact -as the approximation is improved, the kink moves to the left. the Caffarelli-Silvestre extension of (1.1) from Ω to C. Our main contributions are the following.", "• General operators and nonconvex domains.", "We proposed a tensor product argument for continuous, piecewise linear FEM in both (0, ∞), and in Ω with proper mesh refinement towards y = 0 and the corners c of Ω.", "Assuming that A and c are as in Proposition 5.3, we showed that the approximate solution to problem (1.1) exhibits a near optimal asymptotic convergence rate O(h Ω | log h Ω |) subject to the optimal regularity f ∈ H 1−s (Ω).", "However, if N Ω denotes the number of degrees of freedom in the discretization in Ω, then the total number of degrees of freedom grows asymptotically as O(N 3/2 Ω ) (ignoring logarithmic factors)." ]
[ "The error analysis proceeded by a suitable form of quasi-optimality in Lemma 5.1 and the construction of a tensor product FEM interpolant in the truncated cylinder C Y .", "This interpolant was constructed from a nodal, continuous and piecewise linear interpolant π 1,ℓ η with respect to the extended variable y ∈ (0, Y ) on a radicalgeometric mesh, and from an L 2 (Ω) projection Π ℓ β in Ω onto the space of continuous, piecewise linears on a suitable sequence {T ℓ β } ℓ≥0 of regular nested, bisection-tree, simplicial meshes with refinement towards the corners c of Ω.", "A novel result from #OTHEREFR implies that Π ℓ β is also uniformly H 1 (Ω)-stable with respect to the refinement level ℓ.", "The present construction would likewise work with any other concurrently L 2 (Ω) and H 1 (Ω) stable family of quasi-interpolation operators, e.g. those of #OTHEREFR .", "• Sparse tensor grids." ]
[ "convex domains Ω" ]
result
{ "title": "Tensor FEM for spectral fractional diffusion", "abstract": "Abstract. We design and analyze several Finite Element Methods (FEMs) applied to the Caffarelli-Silvestre extension that localizes the fractional powers of symmetric, coercive, linear elliptic operators in bounded domains with Dirichlet boundary conditions. We consider open, bounded, polytopal but not necessarily convex domains Ω ⊂ R d with d = 1, 2. For the solution to the extension problem, we establish analytic regularity with respect to the extended variable y ∈ (0, ∞). We prove that the solution belongs to countably normed, power-exponentially weighted Bochner spaces of analytic functions with respect to y, taking values in corner-weighted Kondat'ev type Sobolev spaces in Ω. In Ω ⊂ R 2 , we discretize with continuous, piecewise linear, Lagrangian FEM (P 1 -FEM) with mesh refinement near corners, and prove that first order convergence rate is attained for compatible data f ∈ H 1−s (Ω). We also prove that tensorization of a P 1 -FEM in Ω with a suitable hp-FEM in the extended variable achieves log-linear complexity with respect to N Ω , the number of degrees of freedom in the domain Ω. In addition, we propose a novel, sparse tensor product FEM based on a multilevel P 1 -FEM in Ω and on a P 1 -FEM on radical-geometric meshes in the extended variable. We prove that this approach also achieves log-linear complexity with respect to N Ω . Finally, under the stronger assumption that the data is analytic in Ω, and without compatibility at ∂Ω, we establish exponential rates of convergence of hp-FEM for spectral, fractional diffusion operators in energy norm. This is achieved by a combined tensor product hp-FEM for the Caffarelli-Silvestre extension in the truncated cylinder Ω × (0, Y ) with anisotropic geometric meshes that are refined towards ∂Ω. We also report numerical experiments for model problems which confirm the theoretical results. We indicate several extensions and generalizations of the proposed methods to other problem classes and to other boundary conditions on ∂Ω." }
{ "title": "A PDE approach to fractional diffusion in general domains: a priori error analysis", "abstract": "Abstract. The purpose of this work is the study of solution techniques for problems involving fractional powers of symmetric coercive elliptic operators in a bounded domain with Dirichlet boundary conditions. These operators can be realized as the Dirichlet to Neumann map for a degenerate/singular elliptic problem posed on a semi-infinite cylinder, which we analyze in the framework of weighted Sobolev spaces. Motivated by the rapid decay of the solution of this problem, we propose a truncation that is suitable for numerical approximation. We discretize this truncation using first degree tensor product finite elements. We derive a priori error estimates in weighted Sobolev spaces. The estimates exhibit optimal regularity but suboptimal order for quasi-uniform meshes. For anisotropic meshes, instead, they are quasi-optimal in both order and regularity. We present numerical experiments to illustrate the method's performance." }
1307.2474
1302.0698
Introduction
The numerical analysis of the elliptic PDE (−∆) s u = f in a bounded domain with zero boundary data via the extension method has been recently studied by Nochetto and collaborators using finite elements, #REFR .
[ "Previous works dealing with the numerical analysis of nonlocal equations of this type are due to Cifani, Jakobsen, and Karlsen in #OTHEREFR , #OTHEREFR , #OTHEREFR .", "In particular, they formulate some convergent numerical methods for entropy and viscosity solutions.", "One of the main differences of our work is that we do not directly deal with the integral formulation of the fractional Laplacian; instead of this, we pass through the CaffarelliSilvestre extension #OTHEREFR , which replaces the calculation of the singular integral (1.2) by the calculation of a convenient extension in one more space dimension.", "While in the case σ = 1 the extension of u m (x, t) is just a function w(x, y, t) which is harmonic in (x, y) for every t fixed, in the cases σ = 1 the extension is a so-called σ-harmonic function, i.", "e., the solution of an elliptic equation with a weight that is either degenerate or singular at y = 0." ]
[ "The paper is organized as follows.", "In Section 2 we give a brief description of the problem we are concerned with.", "We present an equivalent way of expressing the problem avoiding the nonlocal operator formulation.", "For numerical reasons it is convenient to start by posing the problem in a bounded domain.", "In Section 3 we propose a two-points approximation formula for the weighted derivative that appears in the extension formulation, as well as a proof of the order of convergence; besides, a numerical experiment is given for this approximation." ]
[ "elliptic PDE" ]
method
{ "title": "Finite difference method for a general fractional porous medium equation", "abstract": "We formulate a numerical method to solve the porous medium type equation with fractional diffusion ∂u ∂t posed for x ∈ R N , t > 0, with m ≥ 1, σ ∈ (0, 2), and nonnegative initial data u(x, 0). We prove existence and uniqueness of the solution of the numerical method and also the convergence to the theoretical solution of the equation with an order depending on σ. We also propose a two points approximation to a σ-derivative with order O(h 2−σ )." }
{ "title": "A PDE approach to fractional diffusion in general domains: a priori error analysis", "abstract": "Abstract. The purpose of this work is the study of solution techniques for problems involving fractional powers of symmetric coercive elliptic operators in a bounded domain with Dirichlet boundary conditions. These operators can be realized as the Dirichlet to Neumann map for a degenerate/singular elliptic problem posed on a semi-infinite cylinder, which we analyze in the framework of weighted Sobolev spaces. Motivated by the rapid decay of the solution of this problem, we propose a truncation that is suitable for numerical approximation. We discretize this truncation using first degree tensor product finite elements. We derive a priori error estimates in weighted Sobolev spaces. The estimates exhibit optimal regularity but suboptimal order for quasi-uniform meshes. For anisotropic meshes, instead, they are quasi-optimal in both order and regularity. We present numerical experiments to illustrate the method's performance." }
1508.02807
1302.0698
A finite element method for the state equation
It is instructive to review the results of #REFR , which assume that Ω is a convex polytopal subset of R^n (n ≥ 1) with boundary ∂Ω.
[ "In the next section we will propose a fully discrete scheme to approximate the solution to the optimal control problem (1.2)-(1.4) .", "The analysis relies, first, on the localization results of Section 3, and second, on finite element approximation techniques for solving (4.1) on curved domains; the latter being an extension of the priori error analysis developed in #OTHEREFR .", "We comment that such an analysis is not trivial, since involves anisotropic meshes in the extended dimension and the nonuniform weight y α (α = 1 − 2s ∈ (−1, 1)), which degenerates (s < 1/2) or blows up (s > 1/2)." ]
[ "To do this, we start by recalling the regularity properties of U and v, solutions to (3.2) and (4.1), respectively.", "The second order regularity of U is much worse in the extended direction. In fact #OTHEREFR Theorem 2.7] (see #OTHEREFR Remark 25]", "with β > 2α + 1.", "These regularity estimates have important consequences in the design of efficient numerical techniques to solve (4.1); they suggest that graded meshes in the extended (n+1)-dimension must be used.", "We recall the construction of the family of meshes {T Y } over C Y used in #OTHEREFR ." ]
[ "boundary ∂Ω." ]
background
{ "title": "A piecewise linear FEM for an optimal control problem of fractional operators: error analysis on curved domains", "abstract": "Abstract. We propose and analyze a new discretization technique for a linearquadratic optimal control problem involving the fractional powers of a symmetric and uniformly elliptic second oder operator; control constraints are considered. Since these fractional operators can be realized as the Dirichlet-toNeumann map for a nonuniformly elliptic equation, we recast our problem as a nonuniformly elliptic optimal control problem. The rapid decay of the solution to this problem suggests a truncation that is suitable for numerical approximation. We propose a fully discrete scheme that is based on piecewise linear functions on quasi-uniform meshes to approximate the optimal control and first-degree tensor product functions on anisotropic meshes for the optimal state variable. We provide an a priori error analysis that relies on derived Hölder and Sobolev regularity estimates for the optimal variables and error estimates for an scheme that approximates fractional diffusion on curved domains; the latter being an extension of previous available results. The analysis is valid in any dimension. We conclude by presenting some numerical experiments that validate the derived error estimates." }
{ "title": "A PDE approach to fractional diffusion in general domains: a priori error analysis", "abstract": "Abstract. The purpose of this work is the study of solution techniques for problems involving fractional powers of symmetric coercive elliptic operators in a bounded domain with Dirichlet boundary conditions. These operators can be realized as the Dirichlet to Neumann map for a degenerate/singular elliptic problem posed on a semi-infinite cylinder, which we analyze in the framework of weighted Sobolev spaces. Motivated by the rapid decay of the solution of this problem, we propose a truncation that is suitable for numerical approximation. We discretize this truncation using first degree tensor product finite elements. We derive a priori error estimates in weighted Sobolev spaces. The estimates exhibit optimal regularity but suboptimal order for quasi-uniform meshes. For anisotropic meshes, instead, they are quasi-optimal in both order and regularity. We present numerical experiments to illustrate the method's performance." }
1808.00584
1302.0698
Numerical results
Here, the values of y+ and of M are determined as linear functions of log10 of the number of elements in T_Ω to control the error resulting from the truncation of the y-domain #REFR .
[ "Our test problem will be (2), either with µ = s, or with µ = (s, ν) for a one-dimensional parameter ν.", "In all cases the spatial domain Ω is a rectangular two-dimensional set:", "We use 5, 000 elements on Ω to form a triangulation T Ω , and choose M = M FE := 158 in #OTHEREFR to define the partition I y + of [0, y + ].", "The grading parameter γ is chosen as γ = 6 when s ≤ 1 2 , and γ = 2 otherwise.", "We therefore have that the dimension of the truth approximation is dim X h = N = 413, 559." ]
[ "In our computations, this results in y + = 2.233." ]
[ "truncation" ]
method
{ "title": "Certified reduced basis methods for fractional Laplace equations via extension", "abstract": "Fractional Laplace equations are becoming important tools for mathematical modeling and prediction. Recent years have shown much progress in developing accurate and robust algorithms to numerically solve such problems, yet most solvers for fractional problems are computationally expensive. Practitioners are often interested in choosing the fractional exponent of the mathematical model to match experimental and/or observational data; this requires the computational solution to the fractional equation for several values of the both exponent and other parameters that enter the model, which is a computationally expensive many-query problem. To address this difficulty, we present a model order reduction strategy for fractional Laplace problems utilizing the reduced basis method (RBM). Our RBM algorithm for this fractional partial differential equation (PDE) allows us to accomplish significant acceleration compared to a traditional PDE solver while maintaining accuracy. Our numerical results demonstrate this accuracy and efficiency of our RBM algorithm on fractional Laplace problems in two spatial dimensions." }
{ "title": "A PDE approach to fractional diffusion in general domains: a priori error analysis", "abstract": "Abstract. The purpose of this work is the study of solution techniques for problems involving fractional powers of symmetric coercive elliptic operators in a bounded domain with Dirichlet boundary conditions. These operators can be realized as the Dirichlet to Neumann map for a degenerate/singular elliptic problem posed on a semi-infinite cylinder, which we analyze in the framework of weighted Sobolev spaces. Motivated by the rapid decay of the solution of this problem, we propose a truncation that is suitable for numerical approximation. We discretize this truncation using first degree tensor product finite elements. We derive a priori error estimates in weighted Sobolev spaces. The estimates exhibit optimal regularity but suboptimal order for quasi-uniform meshes. For anisotropic meshes, instead, they are quasi-optimal in both order and regularity. We present numerical experiments to illustrate the method's performance." }
1606.04912
1302.0698
R d
This result was then utilized in #REFR in the numerical approximation of the fractional Laplacian (−∆)^s defined in (1.4), by solving the integer-order equation on Ω × (0, ∞) via graded meshes in the extended variable.
[ "As the sample paths of a Lévy process admit jumps of arbitrary lengths, the boundary data must be imposed on the entire complement Ω c of the domain Ω.", "On the other hand, for the Laplacian equation ((1.3) with s = 1), the underlying stochastic process is a Brownian motion that has continuous sample paths that intersect the boundary ∂Ω of the domain Ω almost surely.", "Hence, the boundary condition needs only be specified at the boundary ∂Ω.", "Alternatively, let {λ n , ψ n } ∞ n=1 be the set of eigenvalues and (L 2 orthogonal and) normalized eigenfunctions of the Laplace operator in Ω with the homogeneous Dirichlet boundary condition on ∂Ω.", "In #OTHEREFR (−∆) s defined in (1.4) was extended to a integer-order partial differential equation posed on Ω × (0, ∞) by generalizing the result in #OTHEREFR ." ]
[ "An alternative numerical discretization of the fractional Laplacian defined by (1.4) was presented in #OTHEREFR via a discrete version of the spectral decomposition of (1.4) .", "The constitutive models in peridynamics depend on finite deformation vectors, instead of deformation gradients in classical constitutive models #OTHEREFR .", "Consequently, peridynamic models yield nonlocal mathematical formulations that are based on longrange interactions and present more appropriate representation of discontinuities in displacement fields and the description of cracks and their evolution in materials than classical continuum solid mechanics that are based on local interactions.", "For instance, a bond-based linear peridynamic model takes the form #OTHEREFR C(d, δ, k)", "(x − y) ⊗ (x − y) |x − y| 3 u(x) − u(y) dy = f (x), x ∈ Ω, u(x) = g(x), x ∈ Ω δ ." ]
[ "fractional Laplacian" ]
method
{ "title": "Wellposedness and regularity of steady-state two-sided variable-coefficient conservative space-fractional diffusion equations", "abstract": "Abstract. We study the Dirichlet boundary-value problem of steady-state two-sided variablecoefficient conservative space-fractional diffusion equations. We show that the Galerkin weak formulation, which was proved to be coercive and continuous for a constant-coefficient analogue of the problem, loses its coercivity. We characterize the solution to the variable-coefficient problem in terms of the solutions of second-order diffusion equations along with a two-sided fractional integral equation. We then derive a Petrov-Galerkin formulation for this problem and prove that the weak formulation is weakly coercive and so the problem is well posed. We then prove high-order regularity estimates of the true solution in a properly chosen norm of Riemann-Liouville derivatives. Key words. two-sided variable-coefficient fractional diffusion equation, Petrov-Galerkin formulation, weak coercivity, wellposedness AMS subject classifications. 65M25,65M60,65Z05,76M10,76M25,80A10,80A30 1. Introduction. In recent years nonlocal models are emerging as powerful tools for modeling challenging phenomena including overlapping microscopic and macroscopic scales, anomalous transport, and long-range time memory or spatial interactions in nature, science, social science, and engineering [24, 25, 26, 43] . Data-driven fractional-order differential operators can be constructed to model a specific phenomenon instead of the current practice of tweaking the coefficients that multiply pre-set integer-order differential operators. It was shown that the misspecification of physical models using an integer-order partial differential equation often leads to a variable coecient fit (struggling to fit the data at each location, for example) whereas a physical model using a fractional-order partial differential equation can fit all the data with a constant coefficient [5] . In short, nonlocal models open up great opportunities and flexibility for modeling and simulation of multiphysical phenomena, e.g. from local to nonlocal dynamics [43] . Because of their significantly improved modeling capabilities, various related but different nonlocal models, including fractional Laplacian, nonlocal diffusion and peridynamics, and fractional partial differential equations, have been developed to describe diverse nonlocal phenomena. The fractional Laplacian operator (−∆u) s of order 0 < s < 1 has been used to model nonlocal behavior in many physical problems [2, 4, 12, 21] and has appeared as the infinitesimal generator of a stable Lévy process [2, 17, 18, 38] . (−∆) s can be defined as a pseudodifferential operator of symbol |ξ| 2s on the entire space R d [2]" }
{ "title": "A PDE approach to fractional diffusion in general domains: a priori error analysis", "abstract": "Abstract. The purpose of this work is the study of solution techniques for problems involving fractional powers of symmetric coercive elliptic operators in a bounded domain with Dirichlet boundary conditions. These operators can be realized as the Dirichlet to Neumann map for a degenerate/singular elliptic problem posed on a semi-infinite cylinder, which we analyze in the framework of weighted Sobolev spaces. Motivated by the rapid decay of the solution of this problem, we propose a truncation that is suitable for numerical approximation. We discretize this truncation using first degree tensor product finite elements. We derive a priori error estimates in weighted Sobolev spaces. The estimates exhibit optimal regularity but suboptimal order for quasi-uniform meshes. For anisotropic meshes, instead, they are quasi-optimal in both order and regularity. We present numerical experiments to illustrate the method's performance." }
1409.7721
1302.0698
Introduction
Finally, we mention that finite element approximations for the fractional problem (1.2) were studied in #REFR by using the extension problem.
[ "It is known that there is a Markov process Y t having as generator the fractional power (−∆ D ) s of the Dirichlet Laplacian −∆ D on Ω.", "Indeed, we first kill the Wiener process X t at τ Ω , the first exit time of X t from Ω, and then we subordinate the killed Wiener process with an s-stable subordinator T t .", "Hence Y t = X Tt is the desired process, see for example #OTHEREFR and references therein.", "For a semilinear problem involving the fractional Dirichlet Laplacian see #OTHEREFR and references therein.", "By considering nonlocal chemical diffusion in the Keller-Segel model one is led to a semilinear problem for the fractional Neumann Laplacian, see #OTHEREFR ." ]
[ "We now present the interior regularity estimates.", "Theorem 1.1 (Interior regularity for f in C α ).", "Assume that Ω is a bounded Lipschitz domain and that f ∈ C 0,α (Ω), for some 0 < α < 1. Let u be a solution to (1.2) or (1.6).", "(1) Suppose that 0 < α + 2s < 1 and that A(x) is continuous in Ω. Then u ∈ C 0,α+2s (Ω) and", "(2) Suppose that 1 < α + 2s < 2 and that A(x) is in C 0,α+2s−1 (Ω). Then u ∈ C 1,α+2s−1 (Ω) and" ]
[ "fractional problem", "finite element approximations" ]
method
{ "title": "Fractional elliptic equations, Caccioppoli estimates and regularity", "abstract": "Abstract. Let L = − divx(A(x)∇x) be a uniformly elliptic operator in divergence form in a bounded domain Ω. We consider the fractional nonlocal equations on ∂Ω, and Here L s , 0 < s < 1, is the fractional power of L and ∂ A u is the conormal derivative of u with respect to the coefficients A(x). We reproduce Caccioppoli type estimates that allow us to develop the regularity theory. Indeed, we prove interior and boundary Schauder regularity estimates depending on the smoothness of the coefficients A(x), the right hand side f and the boundary of the domain. Moreover, we establish estimates for fundamental solutions in the spirit of the classical result by Littman-Stampacchia-Weinberger and we obtain nonlocal integro-differential formulas for L s u(x). Essential tools in the analysis are the semigroup language approach and the extension problem." }
{ "title": "A PDE approach to fractional diffusion in general domains: a priori error analysis", "abstract": "Abstract. The purpose of this work is the study of solution techniques for problems involving fractional powers of symmetric coercive elliptic operators in a bounded domain with Dirichlet boundary conditions. These operators can be realized as the Dirichlet to Neumann map for a degenerate/singular elliptic problem posed on a semi-infinite cylinder, which we analyze in the framework of weighted Sobolev spaces. Motivated by the rapid decay of the solution of this problem, we propose a truncation that is suitable for numerical approximation. We discretize this truncation using first degree tensor product finite elements. We derive a priori error estimates in weighted Sobolev spaces. The estimates exhibit optimal regularity but suboptimal order for quasi-uniform meshes. For anisotropic meshes, instead, they are quasi-optimal in both order and regularity. We present numerical experiments to illustrate the method's performance." }
1507.08970
1302.0698
This localization was exploited by Nochetto, Otárola and Salgado in #REFR , where the authors study the numerical approximation of the spectral fractional Laplacian by considering graded meshes in the extended variable. 2.
[ "1.", "One first possibility is to consider fractional powers of the Dirichlet Laplace operator in the sense of spectral theory.", "Indeed, let {ψ k , λ k } k∈N ⊂ H 1 0 (Ω) × R + be the set of normalized eigenfunctions and eigenvalues for the Laplace operator in Ω with homogeneous Dirichlet boundary conditions, so that {ψ k } k∈N is an orthonormal basis of L 2 (Ω) and", "Then, the spectral fractional Laplacian (−∆) and can be subsequently extended by density to the Hilbert space H s (Ω).", "In #OTHEREFR , the Caffarelli-Silvestre result was proved for this operator, thus achieving a local problem posed on a semi-infinite cylinder Ω× (0, ∞)." ]
[ "A second feasible definition is attained by considering the integral formulation (1.2), and restricting it to functions supported in Ω.", "This gives rise to the integral fractional Laplacian (−∆) s I u.", "This operator is different to the spectral fractional Laplacian; for example, their difference is positive definite and positivity preserving #OTHEREFR .", "See also #OTHEREFR , where the spectra of these operators are compared.", "The main difficulties to overcome when dealing with numerical analysis of this integral fractional Laplacian are associated to its nonlocality and to the singularity at x = y of the kernel it involves. 3." ]
[ "spectral fractional" ]
background
{ "title": "A fractional Laplace equation: regularity of solutions and Finite Element approximations", "abstract": "Abstract. This paper deals with the integral version of the Dirichlet homogeneous fractional Laplace equation. For this problem weighted and fractional Sobolev a priori estimates are provided in terms of the Hölder regularity of the data. By relying on these results, optimal order of convergence for the standard linear finite element method is proved for quasi-uniform as well as graded meshes. Some numerical examples are given showing results in agreement with the theoretical predictions." }
{ "title": "A PDE approach to fractional diffusion in general domains: a priori error analysis", "abstract": "Abstract. The purpose of this work is the study of solution techniques for problems involving fractional powers of symmetric coercive elliptic operators in a bounded domain with Dirichlet boundary conditions. These operators can be realized as the Dirichlet to Neumann map for a degenerate/singular elliptic problem posed on a semi-infinite cylinder, which we analyze in the framework of weighted Sobolev spaces. Motivated by the rapid decay of the solution of this problem, we propose a truncation that is suitable for numerical approximation. We discretize this truncation using first degree tensor product finite elements. We derive a priori error estimates in weighted Sobolev spaces. The estimates exhibit optimal regularity but suboptimal order for quasi-uniform meshes. For anisotropic meshes, instead, they are quasi-optimal in both order and regularity. We present numerical experiments to illustrate the method's performance." }
1709.00730
1302.0698
Sobolev Spaces and Inequalities
First, we recall the results and notation of fractional and weighted Poincaré inequalities presented in #REFR and references therein.
[ "In this section we will introduce the notation of fractional and weighted Sobolev spaces." ]
[ "We also present and prove some useful inverse and trace inequalities in the weighted Sobolev space, thus linking the two kinds of Sobolev spaces." ]
[ "fractional" ]
background
{ "title": "Numerical Homogenization of Heterogeneous Fractional Laplacians", "abstract": "In this paper, we develop a numerical multiscale method to solve the fractional Laplacian with a heterogeneous diffusion coefficient. When the coefficient is heterogeneous, this adds to the computational costs. Moreover, the fractional Laplacian is a nonlocal operator in its standard form, however the Caffarelli-Silvestre extension allows for a localization of the equations. This adds a complexity of an extra spacial dimension and a singular/degenerate coefficient depending on the fractional order. Using a sub-grid correction method, we correct the basis functions in a natural weighted Sobolev space and show that these corrections are able to be truncated to design a computationally efficient scheme with optimal convergence rates. A key ingredient of this method is the use of quasi-interpolation operators to construct the fine scale spaces. Since the solution of the extended problem on the critical boundary is of main interest, we construct a projective quasi-interpolation that has both d and d + 1 dimensional averages over subsets in the spirit of the Scott-Zhang operator. We show that this operator satisfies local stability and local approximation properties in weighted Sobolev spaces. We further show that we can obtain a greater rate of convergence for sufficient smooth forces, and utilizing a global L 2 projection on the critical boundary. We present some numerical examples, utilizing our projective quasi-interpolation in dimension 2 + 1 for analytic and heterogeneous cases to demonstrate the rates and effectiveness of the method." }
{ "title": "A PDE approach to fractional diffusion in general domains: a priori error analysis", "abstract": "Abstract. The purpose of this work is the study of solution techniques for problems involving fractional powers of symmetric coercive elliptic operators in a bounded domain with Dirichlet boundary conditions. These operators can be realized as the Dirichlet to Neumann map for a degenerate/singular elliptic problem posed on a semi-infinite cylinder, which we analyze in the framework of weighted Sobolev spaces. Motivated by the rapid decay of the solution of this problem, we propose a truncation that is suitable for numerical approximation. We discretize this truncation using first degree tensor product finite elements. We derive a priori error estimates in weighted Sobolev spaces. The estimates exhibit optimal regularity but suboptimal order for quasi-uniform meshes. For anisotropic meshes, instead, they are quasi-optimal in both order and regularity. We present numerical experiments to illustrate the method's performance." }
1603.08989
1302.0698
The use of the aforementioned localization techniques for the numerical treatment of problem (1.3) followed not so long after #REFR .
[ "The parameter α is defined as α = 1 − 2s ∈ (−1, 1) and the conormal exterior derivative of U at Ω × {0} is", "We call y the extended variable and call the dimension n + 1 in R n+1 + the extended dimension of problem (1.5).", "The limit in (1.6) must be understood in the distributional sense; see #OTHEREFR .", "With these elements at hand, we then write the fundamental result by L. Caffarelli and L.", "Silvestre #OTHEREFR : the fractional Laplacian and the Dirichlet-to-Neumann map of problem (1.5) are related by d s (−∆) s u = ∂ ν α U in Ω." ]
[ "In this reference, the authors propose the following technique to solve problem (1.3): given z, solve (1.5), thus obtaining a function U ; setting u(x ) = U (x , 0), the solution to (1.3) is obtained.", "The implementation of this scheme uses standard components of finite element analysis, while its analysis combines asymptotic properties of Bessel functions #OTHEREFR , elements of harmonic analysis #OTHEREFR and a polynomial interpolation theory on weighted spaces #OTHEREFR .", "The latter is valid for tensor product elements that exhibit a large aspect ratio in y (anisotropy), which is necessary to fit the behavior of U (x , y) with x ∈ Ω and y > 0.", "The main advantage of this scheme is that it solves the local problem (1.5) instead of dealing with (−∆) s in (1.3).", "However, this comes at the expense of incorporating one more dimension to the problem; issue that has been resolved to some extent with the design of fast solvers #OTHEREFR and adaptive finite element methods (AFEMs) #OTHEREFR ." ]
[ "numerical treatment" ]
method
{ "title": "An a posteriori error analysis for an optimal control problem involving the fractional Laplacian", "abstract": "Abstract. In a previous work, we introduced a discretization scheme for a constrained optimal control problem involving the fractional Laplacian. For such a control problem, we derived optimal a priori error estimates that demand the convexity of the domain and some compatibility conditions on the data. To relax such restrictions, in this paper, we introduce and analyze an efficient and, under certain assumptions, reliable a posteriori error estimator. We realize the fractional Laplacian as the Dirichlet-to-Neumann map for a nonuniformly elliptic problem posed on a semi-infinite cylinder in one more spatial dimension. This extra dimension further motivates the design of an posteriori error indicator. The latter is defined as the sum of three contributions, which come from the discretization of the state and adjoint equations and the control variable. The indicator for the state and adjoint equations relies on an anisotropic error estimator in Muckenhoupt weighted Sobolev spaces. The analysis is valid in any dimension. On the basis of the devised a posteriori error estimator, we design a simple adaptive strategy that exhibits optimal experimental rates of convergence. Key words. linear-quadratic optimal control problem, fractional diffusion, nonlocal operators, a posteriori error estimates, anisotropic estimates, adaptive algorithm." }
{ "title": "A PDE approach to fractional diffusion in general domains: a priori error analysis", "abstract": "Abstract. The purpose of this work is the study of solution techniques for problems involving fractional powers of symmetric coercive elliptic operators in a bounded domain with Dirichlet boundary conditions. These operators can be realized as the Dirichlet to Neumann map for a degenerate/singular elliptic problem posed on a semi-infinite cylinder, which we analyze in the framework of weighted Sobolev spaces. Motivated by the rapid decay of the solution of this problem, we propose a truncation that is suitable for numerical approximation. We discretize this truncation using first degree tensor product finite elements. We derive a priori error estimates in weighted Sobolev spaces. The estimates exhibit optimal regularity but suboptimal order for quasi-uniform meshes. For anisotropic meshes, instead, they are quasi-optimal in both order and regularity. We present numerical experiments to illustrate the method's performance." }
1911.08541
1903.00107
I. INTRODUCTION
To alleviate the pattern artifacts, the dark channel prior was incorporated into the loss function and the residual network of DeblurGAN was replaced with a light-weight U-net #REFR .
[ "They trained neural networks to predict motion blur kernel in spatial or frequency domain and restored the sharp image by time-consuming deconvolution. Recently, the end-toend deblurring networks have drawn much attention.", "These techniques produce the latent sharp image from a blurry one in one pass without explicitly estimating blur kernels, thus intrinsically avoiding artifacts and distortions caused by erroneous kernels.", "A multi-scale network #OTHEREFR was built by transferring the traditional coarse-to-fine scheme to CNN.", "Its performance is significantly improved in SRN #OTHEREFR that embedded Recurrent Neural Networks (RNN) into the multiscale structure.", "The development of Conditional Generative Adversarial Nets (CGAN) inspired another type of deep deblurring network called DeblurGAN #OTHEREFR ." ]
[ "However, the poor performance in challenging applications is still an issue in those networks.", "For example, the multi-scale network proposed by SRN #OTHEREFR fails to handle the large blur kernel as shown in Fig. 1 .", "Although SRN adopted three scales of neural networks, it is not enough for such large blur kernel.", "The design of three scales is a trade-off in terms of network size and capability to remove significant blur effects. Similar performance can also be seen in GAN-based networks.", "Observing that the noisy sharp image is a very good initial approximation to the latent clean image, we propose to use the noisy/blurry image pair captured in a burst as the input to the network." ]
[ "residual nets" ]
method
{ "title": "Deep Motion Blur Removal Using Noisy/Blurry Image Pairs", "abstract": "Removing spatially variant motion blur from a blurry image is a challenging problem as blur sources are complicated and difficult to model accurately. Recent progress in deep neural networks suggests that kernel free single image deblurring can be efficiently performed, but questions about deblurring performance persist. Thus, we propose to restore a sharp image by fusing a pair of noisy/blurry images captured in a burst. Two neural network structures, DeblurRNN and DeblurMerger, are presented to exploit the pair of images in a sequential manner or parallel manner. To boost the training, gradient loss, adversarial loss and spectral normalization are leveraged. The training dataset that consists of pairs of noisy/blurry images and the corresponding ground truth sharp image is synthesized based on the benchmark dataset GOPRO. We evaluated the trained networks on a variety of synthetic datasets and real image pairs. The results demonstrate that the proposed approach outperforms the state-of-the-art both qualitatively and quantitatively." }
{ "title": "GAN Based Image Deblurring Using Dark Channel Prior", "abstract": "A conditional general adversarial network (GAN) is proposed for image deblurring problem. It is tailored for image deblurring instead of just applying GAN on the deblurring problem. Motivated by that, dark channel prior is carefully picked to be incorporated into the loss function for network training. To make it more compatible with neuron networks, its original indifferentiable form is discarded and L2 norm is adopted instead. On both synthetic datasets and noisy natural images, the proposed network shows improved deblurring performance and robustness to image noise qualitatively and quantitatively. Additionally, compared to the existing end-to-end deblurring networks, our network structure is light-weight, which ensures less training and testing time." }
1912.03366
1602.03686
Introduction
Prediction-based embedding approaches (#REFR ; Choi et al., 2016a) work well on the structured modality, where each patient can be represented as a sequence of visits of codes, and can consider context in terms of the other neighboring codes within the same visit.
[ "However, trying to fuse infor-mation from different modalities in EHR presents the following obstacles for representation learning, 1. Inconsistency in medical concept terminology.", "In structured clinical events, the medical concept is represented with ICD-9/ICD-10 clinical code.", "While in unstructured clinical notes, it is either mentioned with a formal medical term for the concept or an informal analogous term/phrase implicitly.", "It is difficult to consistently detect the presence of a medical concept across different modalities.", "2. Varying contexts." ]
[ "However, for unstructured clinical notes, the context can be noisy due to the presence of text describing all aspects of a patient's admission (e.g., past medical history).", "3. Feature Associations Complexity.", "Other types of patient information such as demographics and laboratory results are possibly important signals in prediction tasks.", "However, modeling their nonlinear and complex relations with the medical concepts is not straightforward.", "4. Interpretability." ]
[ "structured modality", "patient" ]
background
{ "title": "Med2Meta: Learning Representations of Medical Concepts with Meta-Embeddings", "abstract": "Distributed representations of medical concepts have been used to support downstream clinical tasks recently. Electronic Health Records (EHR) capture different aspects of patients' hospital encounters and serve as a rich source for augmenting clinical decision making by learning robust medical concept embeddings. However, the same medical concept can be recorded in different modalities (e.g., clinical notes, lab results) -with each capturing salient information unique to that modality -and a holistic representation calls for relevant feature ensemble from all information sources. We hypothesize that representations learned from heterogeneous data types would lead to performance enhancement on various clinical informatics and predictive modeling tasks. To this end, our proposed approach makes use of meta-embeddings, embeddings aggregated from learned embeddings. Firstly, modality-specific embeddings for each medical concept is learned with graph autoencoders. The ensemble of all the embeddings is then modeled as a meta-embedding learning problem to incorporate their correlating and complementary information through a joint reconstruction. Empirical results of our model on both quantitative and qualitative clinical evaluations have shown improvements over state-ofthe-art embedding models, thus validating our hypothesis." }
{ "title": "Medical Concept Representation Learning from Electronic Health Records and its Application on Heart Failure Prediction", "abstract": "Objective: To transform heterogeneous clinical data from electronic health records into clinically meaningful constructed features using data driven method that rely, in part, on temporal relations among data. The clinically meaningful representations of medical concepts and patients are the key for health analytic applications. Most of existing approaches directly construct features mapped to raw data (e.g., ICD or CPT codes), or utilize some ontology mapping such as SNOMED codes. However, none of the existing approaches leverage EHR data directly for learning such concept representation. We propose a new way to represent heterogeneous medical concepts (e.g., diagnoses, medications and procedures) based on co-occurrence patterns in longitudinal electronic health records. The intuition behind the method is to map medical concepts that are co-occuring closely in time to similar concept vectors so that their distance will be small. We also derive a simple method to construct patient vectors from the related medical concept vectors. For qualitative evaluation, we study similar medical concepts across diagnosis, medication and procedure. In quantitative evaluation, our proposed representation significantly improves the predictive modeling performance for onset of heart failure (HF), where classification methods (e.g. logistic regression, neural network, support vector machine and K-nearest neighbors) achieve up to 23% improvement in area under the ROC curve (AUC) using this proposed representation. We proposed an effective method for patient and medical concept representation learning. The resulting representation can map relevant concepts together and also improves predictive modeling performance." }
2001.05295
1602.03686
Representation Learning For Text
Choi et al. #REFR take the next step and use the code vectors to predict heart failure.
[ "Currently, the most effective representation learning techniques for text expand on document level representations by learning language models.", "A language model is a probabilistic model of sequences of words, as opposed to LSI, which models only the count matrix, or word2vec, which models only the probability of individual words given context.", "These language models are often formulated with complex neural networks with millions of parameters that capture the language generation process by predicting a word at a time, either sequentially with recurrent neural networks #OTHEREFR or via masking with Transformer models #OTHEREFR .", "Work on representation learning for EHR data has focused on adapting methods successfully used in NLP.", "One family of approaches treats medical codes as words, and learns representations for medical codes #OTHEREFR by adapting word2vec to deal with the lack of intrinsic ordering of medical codes within an encounter." ]
[ "In follow up work, Choi et al extend this approach to simultaneously learn medical code and patient level representations #OTHEREFR .", "However, later evaluations on clinical outcomes found this approach was little better than several other baselines in predicting heart failure #OTHEREFR .", "Finally, Miotto et al #OTHEREFR learns patient level representations using autoencoders, reporting significantly better performance than their baselines on the task of predicting future new diagnosis codes.", "However, in Choi et al #OTHEREFR , stacked autoencoders were found to be no better than other baselines at predicting next encounter diagnosis codes.", "Therefore, the utility of learning general purpose representations of EHR data for better predicting clinical outcomes remains unclear." ]
[ "code vectors", "heart failure" ]
background
{ "title": "Language Models Are An Effective Patient Representation Learning Technique For Electronic Health Record Data", "abstract": "Widespread adoption of electronic health records (EHRs) has fueled development of clinical outcome models using machine learning. However, patient EHR data are complex, and how to optimally represent them is an open question. This complexity, along with often small training set sizes available to train these clinical outcome models, are two core challenges for training high quality models. In this paper, we demonstrate that learning generic representations from the data of all the patients in the EHR enables better performing prediction models for clinical outcomes, allowing for these challenges to be overcome. We adapt common representation learning techniques used in other domains and find that representations inspired by language models enable a 3.5% mean improvement in AUROC on five clinical outcomes compared to standard baselines, with the average improvement rising to 19% when only a small number of patients are available for training a prediction model for a given clinical outcome." }
{ "title": "Medical Concept Representation Learning from Electronic Health Records and its Application on Heart Failure Prediction", "abstract": "Objective: To transform heterogeneous clinical data from electronic health records into clinically meaningful constructed features using data driven method that rely, in part, on temporal relations among data. The clinically meaningful representations of medical concepts and patients are the key for health analytic applications. Most of existing approaches directly construct features mapped to raw data (e.g., ICD or CPT codes), or utilize some ontology mapping such as SNOMED codes. However, none of the existing approaches leverage EHR data directly for learning such concept representation. We propose a new way to represent heterogeneous medical concepts (e.g., diagnoses, medications and procedures) based on co-occurrence patterns in longitudinal electronic health records. The intuition behind the method is to map medical concepts that are co-occuring closely in time to similar concept vectors so that their distance will be small. We also derive a simple method to construct patient vectors from the related medical concept vectors. For qualitative evaluation, we study similar medical concepts across diagnosis, medication and procedure. In quantitative evaluation, our proposed representation significantly improves the predictive modeling performance for onset of heart failure (HF), where classification methods (e.g. logistic regression, neural network, support vector machine and K-nearest neighbors) achieve up to 23% improvement in area under the ROC curve (AUC) using this proposed representation. We proposed an effective method for patient and medical concept representation learning. The resulting representation can map relevant concepts together and also improves predictive modeling performance." }
1811.11005
1602.03686
Experimental Results
When comparing our results with previous studies which used clinical concept embeddings to predict HF onset in a similar experimental setup, our approach achieved broadly similar (but slightly worse) overall performance and followed similar patterns: Choi et al. #REFR
[ "For models using the less extensive corpuses, the best performing results were observed with vectors of smaller size (50 dimensions) and larger context windows (ranging from #OTHEREFR .", "Although, counter-intuitively, the PRIMDX best embedding outperformed PRIMDX-PROC (using procedures and primary diagnoses), PRIMDX-PROC performed better than PRIMDX on average across all vector size and context window combinations.", "This suggests that clinical concept vectors could be beneficial for risk prediction in absence of a domain ontology or in a semi-supervised fashion combined with labelled data to boost performance #OTHEREFR .", "Table 2 : Best performing embeddings in test dataset with optimal hyper-parameters.", "Direct comparison with previous studies is challenging due to the use of different underlying populations, study designs and incomplete definitions of cohorts and outcomes #OTHEREFR ." ]
[ "utilized clinical concept vectors trained using word2vec skip-gram and reported an AUROC of 0.711 with one-hot encoded input and AUROC of 0.743 using embeddings with a SVM classifier.", "Interestingly, the fact that we observed similar (albeit slightly worse) results when using data from multiple hospitals compared to a study sourcing data from a single hospital indicates that embedding approaches can potentially be a very useful tool for scaling analyses across large heterogeneous data source and are insensitive to source variations." ]
[ "clinical concept embeddings" ]
result
{ "title": "Application of Clinical Concept Embeddings for Heart Failure Prediction in UK EHR data", "abstract": "Electronic health records (EHR) are increasingly being used for constructing disease risk prediction models. Feature engineering in EHR data however is challenging due to their highly dimensional and heterogeneous nature. Low-dimensional representations of EHR data can potentially mitigate these challenges. In this paper, we use global vectors (GloVe) to learn word embeddings for diagnoses and procedures recorded using 13 million ontology terms across 2.7 million hospitalisations in national UK EHR. We demonstrate the utility of these embeddings by evaluating their performance in identifying patients which are at higher risk of being hospitalized for congestive heart failure. Our findings indicate that embeddings can enable the creation of robust EHR-derived disease risk prediction models and address some the limitations associated with manual clinical feature engineering." }
{ "title": "Medical Concept Representation Learning from Electronic Health Records and its Application on Heart Failure Prediction", "abstract": "Objective: To transform heterogeneous clinical data from electronic health records into clinically meaningful constructed features using data driven method that rely, in part, on temporal relations among data. The clinically meaningful representations of medical concepts and patients are the key for health analytic applications. Most of existing approaches directly construct features mapped to raw data (e.g., ICD or CPT codes), or utilize some ontology mapping such as SNOMED codes. However, none of the existing approaches leverage EHR data directly for learning such concept representation. We propose a new way to represent heterogeneous medical concepts (e.g., diagnoses, medications and procedures) based on co-occurrence patterns in longitudinal electronic health records. The intuition behind the method is to map medical concepts that are co-occuring closely in time to similar concept vectors so that their distance will be small. We also derive a simple method to construct patient vectors from the related medical concept vectors. For qualitative evaluation, we study similar medical concepts across diagnosis, medication and procedure. In quantitative evaluation, our proposed representation significantly improves the predictive modeling performance for onset of heart failure (HF), where classification methods (e.g. logistic regression, neural network, support vector machine and K-nearest neighbors) achieve up to 23% improvement in area under the ROC curve (AUC) using this proposed representation. We proposed an effective method for patient and medical concept representation learning. The resulting representation can map relevant concepts together and also improves predictive modeling performance." }
1907.09600
1602.03686
INTRODUCTION
In #REFR embeddings of the aforementioned types of codes are used to predict heart failure.
[ "Especially due to the presence of free text and of di erent types clinical codes, EHR data requires potentially very high dimensional representations of patient information, with a challenge to design machine learning models that for many institutions like ours can only be trained over a relatively limited number of instances.", "Word embeddings techniques such as Word2Vec #OTHEREFR and GloVe #OTHEREFR are unsupervised approaches to represent text in low dimensional spaces.", "ey are based on the principle that di erent words in similar contexts may have similar meanings and therefore can be represented by similar vectors.", "ere is a growing interest in applying embeddings to the healthcare domain to generate patient representations.", "In #OTHEREFR , the authors trained Word2Vec embeddings over LOINC, ICD-9 and NDC codes associated to the insurance claims of 4 millions of patients and evaluated them over a series of known relationships among medical concepts (e.g disease A is treated by medication B)." ]
[ "To date, the embeddings of medical concepts trained on the largest amount of data (insurance claims and medical notes from 60 million patients) are those presented in [1] . Embeddings are trained from physician DSHealth, Anchorage, AK 2019. 978-x-xxxx-xxxx-x/YY/MM. . .", "$15.00 DOI: 10.1145/nnnnnnn.nnnnnnn notes of 100,000 and 250,000 general hospital patients and evaluated over predictive tasks in #OTHEREFR , #OTHEREFR respectively.", "In contrast with the se ings of the aforementioned works, in our organization we have access to data from fewer patients (in the order of tens of thousands), mostly a ected by cancer.", "We focus on embedding representations for laboratory test data.", "In an analogy with the application of embedding representations to natural language processing, we can see each lab code as a word, a lab order as a sentence and a visit as a document." ]
[ "embeddings" ]
method
{ "title": "Evaluation of Embeddings of Laboratory Test Codes for Patients at a Cancer Center", "abstract": "Laboratory test results are an important and generally high dimensional component of a patient's Electronic Health Record (EHR). We train embedding representations (via Word2Vec and GloVe) for LOINC codes of laboratory tests from the EHRs of about 80,000 patients at a cancer center. To include information about lab test outcomes, we also train embeddings on the concatenation of a LOINC code with a symbol indicating normality or abnormality of the result. We observe several clinically meaningful similarities among LOINC embeddings trained over our data. For the embeddings of the concatenation of LOINCs with abnormality codes, we evaluate the performance for mortality prediction tasks and the ability to preserve ordinality properties: i.e. a lab test with normal outcome should be more similar to an abnormal one than to the a very abnormal one." }
{ "title": "Medical Concept Representation Learning from Electronic Health Records and its Application on Heart Failure Prediction", "abstract": "Objective: To transform heterogeneous clinical data from electronic health records into clinically meaningful constructed features using data driven method that rely, in part, on temporal relations among data. The clinically meaningful representations of medical concepts and patients are the key for health analytic applications. Most of existing approaches directly construct features mapped to raw data (e.g., ICD or CPT codes), or utilize some ontology mapping such as SNOMED codes. However, none of the existing approaches leverage EHR data directly for learning such concept representation. We propose a new way to represent heterogeneous medical concepts (e.g., diagnoses, medications and procedures) based on co-occurrence patterns in longitudinal electronic health records. The intuition behind the method is to map medical concepts that are co-occuring closely in time to similar concept vectors so that their distance will be small. We also derive a simple method to construct patient vectors from the related medical concept vectors. For qualitative evaluation, we study similar medical concepts across diagnosis, medication and procedure. In quantitative evaluation, our proposed representation significantly improves the predictive modeling performance for onset of heart failure (HF), where classification methods (e.g. logistic regression, neural network, support vector machine and K-nearest neighbors) achieve up to 23% improvement in area under the ROC curve (AUC) using this proposed representation. We proposed an effective method for patient and medical concept representation learning. The resulting representation can map relevant concepts together and also improves predictive modeling performance." }
1908.08594
1602.03686
Results
One example of medical applications is medGAN #REFR , a generative adversarial network (GAN) that can be trained on a public database of EHRs and then used to generate new, synthetic health records.
[ "The applications of deep learning and recurrent neural networks as well as convolutional networks range from computer vision and picture annotation to summarizing, text generation, question answering, and generating new instances of trained material.", "In some sense, RNNs can b ere viewed as the imputation model of deep learning." ]
[ "However, medGAN can also be considered an 'old style' approach just as the approach I used for generating personality items #OTHEREFR , as medGAN was not based on a pre-trained network that already includes a large body of materials in order to give it general capabilities that would be fine-tuned later.", "The latest generation of language models as represented by GPT-2 are pre-trained based on large amounts of material that is available online.", "GPT-2 was trained on 40GB of text collected from the internet, but excluding WikiPedia as it was considered that some researchers may want to use this resource to retrain the base GPT-2 model.", "These types of language models are considered multitask learners by their creators, i.e., they claim these models are systems that can be trained to perform a number of different language related tasks such as summarization, question answering and translation (e.g. #OTHEREFR ).", "This means that a trained model can be used as the basis for further targeted improvement, and that the rudimentary capabilities already trained into the model can be improved by presenting further task specific material." ]
[ "medical applications", "EHRs" ]
background
{ "title": "Training Optimus Prime, M.D.: Generating Medical Certification Items by Fine-Tuning OpenAI's gpt2 Transformer Model", "abstract": "Objective: Showcasing Artificial Intelligence, in particular deep neural networks, for language modeling aimed at automated generation of medical education test items. OpenAI's gpt2 transformer language model was retrained using PubMed's open access text mining database. The retraining was done using toolkits based on tensorflow-gpu available on GitHub, using a workstation equipped with two GPUs. In comparison to a study that used character based recurrent neural networks trained on open access items, the retrained transformer architecture allows generating higher quality text that can be used as draft input for medical education assessment material. In addition, prompted text generation can be used for production of distractors suitable for multiple choice items used in certification exams. The current state of neural network based language models can be used to develop tools in supprt of authoring medical education exams using retrained models on the basis of corpora consisting of general medical text collections. Future experiments with more recent transformer models (such as Grover, TransformerXL) using existing medical certification exam item pools is expected to further improve results and facilitate the development of assessment materials. The aim of this article is to provide evidence on the current state of automated item generation (AIG) using deep neural networks (DNNs). Based on earlier work, a first paper that tackled this issue used character-based recurrent neural networks [1], the current contribution describes an experiment exploring AIG using transformerbased language models [2]. Time flies in the domain of DNNs used for language modeling, indeed: The day this paper was submitted, on August 13th, 2019, to internal review, NVIDIA published yet another, larger language model of the transformer used in this paper. The MegratronLM (apart from taking a bite out of the pun in this article's title) is currently the largest language model based on the transformer architecture [3]. This latest neural network language model has >8 billions of parameters, which is incomprehensible compared to the type of neural networks we used only two decades ago. At that time, in winter semester 1999-2000, I taught classes about artificial Neural Networks (NNs, e.g. [4]). Back then, Artificial Intelligence (AI) already entered what was referred to as AI winter, as most network sizes were limited to rather small architectures unless supercomputers were employed. On smaller machines that were available to most researchers, only rather limited versions of these NNs could be trained and used, so successful applications were rare, even though one of the key contributions that enabled deep learning and a renaissance of NN-based AI, the Long-Short-Term-Memory (LSTM) design [5] was made in those years. In 2017, I started looking into neural networks again because I wanted to learn how to program Graphical Processing Units (GPUs) for computational statistics and high performance computing (HPC) used in estimating psychometric models [6] . 
This finally led me to write a paper on using deep neural networks for automated item generation [1], a field that has seen many different attempts, but most were only partially successful, involved a lot of human preparation, and ended up more or less being fill-in-the-blanks approaches such as we see in simple form in MadLibs books for learners. While I was able to generate something that resembled human-written personality items, using a public database that contains some 3000 items, and while several of the (cherry-picked) generated items sounded and functioned a lot like those found in personality inventories [7, 8] , I was somewhat skeptical whether one would be able to properly train neural networks for this task, given that it would require a very large number of items, and I assumed that each network for that purpose would need to be solely trained on items of the form it is supposed to generate. Part of my concern was that the items that were generated had to be hand-picked, as many of the generated character or word sequences ended up not being properly formed statements. However, those that were selected for an empirical comparison with human-coded items were found to show the same dimensionality [1] and hence to be fully useful as replacements for human-authored items. Nevertheless, some doubt remained due to the needed hand-picking and the limited supply of training material; after all, AI and neural networks have a long history (e.g., [4] ; also [9] ) and have been hyped to be the next big thing that may soon replace humans and take our jobs. As mentioned, items generated using RNNs [1], then cherry-picked, passed empirical evaluations and hence functioned a lot like the human-written items in an online data collection. However, many of the generated items were either not properly formed statements that are typical for this domain, or if the network was trained too long" }
{ "title": "Medical Concept Representation Learning from Electronic Health Records and its Application on Heart Failure Prediction", "abstract": "Objective: To transform heterogeneous clinical data from electronic health records into clinically meaningful constructed features using data driven method that rely, in part, on temporal relations among data. The clinically meaningful representations of medical concepts and patients are the key for health analytic applications. Most of existing approaches directly construct features mapped to raw data (e.g., ICD or CPT codes), or utilize some ontology mapping such as SNOMED codes. However, none of the existing approaches leverage EHR data directly for learning such concept representation. We propose a new way to represent heterogeneous medical concepts (e.g., diagnoses, medications and procedures) based on co-occurrence patterns in longitudinal electronic health records. The intuition behind the method is to map medical concepts that are co-occuring closely in time to similar concept vectors so that their distance will be small. We also derive a simple method to construct patient vectors from the related medical concept vectors. For qualitative evaluation, we study similar medical concepts across diagnosis, medication and procedure. In quantitative evaluation, our proposed representation significantly improves the predictive modeling performance for onset of heart failure (HF), where classification methods (e.g. logistic regression, neural network, support vector machine and K-nearest neighbors) achieve up to 23% improvement in area under the ROC curve (AUC) using this proposed representation. We proposed an effective method for patient and medical concept representation learning. The resulting representation can map relevant concepts together and also improves predictive modeling performance." }
1903.08766
1801.08532
Perfect Affinity
Moreover, the underestimate was largest at the 50% iteration, the iteration that is generally supposed to have the best statistical properties for inference #REFR .
[ "Perfect affinity also drove a large difference between standard lift and Element lift.", "This difference was also dependent upon the proportion of treated members.", "Using the standard lift estimation technique, we found that at the 50% iteration, the new feature increased messages sent by +2.3%, and increased by +3.0% at the 85% iteration.", "By contrast, the Element corrected lift was much more consistent: +4.4% for the 50% iteration and +4.6% for the 85% iteration.", "The standard method for estimating lift dramatically underestimated the increase in messaging caused by Active Status for both iterations." ]
[ "Thus, Element allowed us not only to identify the presence of perfect affinity, but to corrected for it as well.", "From the description of the experimental setup, it might be fairly trivial to recognize that this would be a situation that should have strong network effects.", "Indeed, it makes perfect sense that members in the treatment group were only encouraged to send extra messages to others in the treatment group -these were the only connections for whom they might be able to see the Active Status indicator.", "By using Element we were able clearly confirm these intuitions.", "Perhaps more importantly, Element was able to produce a corrected lift estimate at the 50% iteration that held true at higher ramp percentages." ]
[ "best statistical properties", "50% iteration" ]
background
{ "title": "A Method for Measuring Network Effects of One-to-One Communication Features in Online A/B Tests", "abstract": "A/B testing is an important decision making tool in product development because can provide an accurate estimate of the average treatment effect of a new features, which allows developers to understand how the business impact of new changes to products or algorithms. However, an important assumption of A/B testing, Stable Unit Treatment Value Assumption (SUTVA), is not always a valid assumption to make, especially for products that facilitate interactions between individuals. In contexts like one-to-one messaging we should expect network interference; if an experimental manipulation is effective, behavior of the treatment group is likely to influence members in the control group by sending them messages, violating this assumption. In this paper, we propose a novel method that can be used to account for network effects when A/B testing changes to one-to-one interactions. Our method is an edge-based analysis that can be applied to standard Bernoulli randomized experiments to retrieve an average treatment effect that is not influenced by network interference. We develop a theoretical model, and methods for computing point estimates and variances of effects of interest via network-consistent permutation testing. We then apply our technique to real data from experiments conducted on the messaging product at LinkedIn. We find empirical support for our model, and evidence that the standard method of analysis for A/B tests underestimates the impact of new features in one-to-one messaging contexts." }
{ "title": "SQR: Balancing Speed, Quality and Risk in Online Experiments", "abstract": "Controlled experimentation, also called A/B testing, is widely adopted to accelerate product innovations in the online world. However, how fast we innovate can be limited by how we run experiments. Most experiments go through a \"ramp up\" process where we gradually increase the traffic to the new treatment to 100%. We have seen huge inefficiency and risk in how experiments are ramped, and it is getting in the way of innovation. This can go both ways: we ramp too slowly and much time and resource is wasted; or we ramp too fast and suboptimal decisions are made. In this paper, we build up a ramping framework that can effectively balance among Speed, Quality and Risk (SQR). We start out by identifying the top common mistakes experimenters make, and then introduce the four SQR principles corresponding to the four ramp phases of an experiment. To truly scale SQR to all experiments, we develop a statistical algorithm that is embedded into the process of running every experiment to automatically recommend ramp decisions. Finally, to complete the whole picture, we briefly cover the autoramp engineering infrastructure that can collect inputs and execute on the recommendations timely and reliably. • Mathematics of computing → Probabilistic inference problems • Computing methodologies → Causal reasoning and diagnostics" }
1708.08741
1204.2072
Related Work
The mesoscale dissipative particle dynamics method is applied in #REFR to simulate electrophoresis of a polyelectrolyte in a nanochannel.
[ "Also three-dimensional parallel simulations of electrophoretic separation with continuum approaches have been reported.", "The finite element simulations in #OTHEREFR consider the buffer composition and the ζ-potential at channel walls.", "In #OTHEREFR mixed finite element and finite difference simulations of protein separation were performed on up to 32 processes.", "At the finest level of resolution, fluid, ions, and particles are simulated by Lagrangian approaches.", "These explicit solvent methods #OTHEREFR typically apply coarse-grained molecular dynamics (MD) models to describe the motion of fluid molecules and incorporate Brownian motion #OTHEREFR ." ]
[ "Another explicit solvent method is presented in #OTHEREFR for simulating DNA electrophoresis, modeling DNA as a polymer.", "In both methods the polymer is represented by bead-spring chains with beads represented by a truncated Lennard-Jones potential and connected by elastic spring potentials.", "Explicit solvent models, however, are computationally very expensive, especially for large numbers of fluid molecules due to pairwise interactions #OTHEREFR .", "Moreover, the resolution of solvent, macromolecules, and ions on the same scale limits the maximal problem sizes that can be simulated #OTHEREFR .", "Also the mapping of measurable properties from colloidal suspensions to these particle-based methods is problematic #OTHEREFR ." ]
[ "electrophoresis", "nanochannel" ]
method
{ "title": "Coupled Multiphysics Simulations of Charged Particle Electrophoresis for Massively Parallel Supercomputers", "abstract": "The article deals with the multiphysics simulation of electrokinetic flows. When charged particles are immersed in a fluid and are additionally subjected to electric fields, this results in a complex coupling of several physical phenomena. In a direct numerical simulation, the dynamics of moving and geometrically resolved particles, the hydrodynamics of the fluid, and the electric field must be suitably resolved and their coupling must be realized algorithmically. Here the two-relaxation-time variant of the lattice Boltzmann method is employed together with a momentum-exchange coupling to the particulate phase. For the electric field that varies in time according to the particle trajectories, a quasistatic continuum model and its discretization with finite volumes is chosen. This field is coupled to the particulate phase in the form of an acceleration due to electrostatic forces and conversely via the respective charges as boundary conditions for the electric potential equation. The electric field is also coupled to the fluid phase by modeling the effect of the ion transport on fluid motion. With the multiphysics algorithm presented in this article, the resulting multiply coupled, interacting system can be simulated efficiently on massively parallel supercomputers. This algorithm is implemented in the waLBerla framework, whose modular software structure naturally supports multiphysics simulations by allowing to flexibly combine different models. The largest simulation of the complete system reported here performs more than 70 000 time steps on more than five billion (5 × 10 9 ) mesh cells for both the hydrodynamics, as represented by a D3Q19 lattice Boltzmann automaton, and the scalar electric field. The computations are executed in a fully scalable fashion on up to 8192 processor cores of a current supercomputer." }
{ "title": "Mesoscopic Simulations of Electroosmotic Flow and Electrophoresis in Nanochannels", "abstract": "We review recent dissipative particle dynamics (DPD) simulations of electrolyte flow in nanochannels. A method is presented by which the slip length δ B at the channel boundaries can be tuned systematically from negative to infinity by introducing suitably adjusted wall-fluid friction forces. Using this method, we study electroosmotic flow (EOF) in nanochannels for varying surface slip conditions and fluids of different ionic strength. Analytic expressions for the flow profiles are derived from the Stokes equation, which are in good agreement with the numerical results. Finally, we investigate the influence of EOF on the effective mobility of polyelectrolytes in nanochannels. The relevant quantity characterizing the effect of slippage is found to be the dimensionless quantity κδ B , where 1/κ is an effective electrostatic screening length at the channel boundaries." }
1806.09757
1706.00890
Adaptive guaranteed-performance consensus design for leaderless cases
Because subsystem (5) describes the consensus motion of multiagent system (1), the following corollary can be obtained by #REFR .
[ "Furthermore, a large γ may regulate the consensus control gain by Theorem 1, so we can choose some proper γ and P to regulate the consensus control gain.", "We introduce a gain factor δ > 0 such that P ≤ δI, where δ can also be regarded as an upper bound of the eigenvalue of P.", "Thus, one can show that PBB T P ≤ δ 2 BB T if the maximum eigenvalue of BB T is not larger than 1.", "Based on LMI techniques, by Schur complement lemma in #OTHEREFR , an adaptive guaranteed-performance consensualization criterion with a given gain factor is proposed as follows.", "In the following, we give an approach to determine the consensus motion. Due to e" ]
[]
[ "consensus motion", "multiagent system" ]
background
{ "title": "Adaptive guaranteed-performance consensus design for high-order multiagent systems", "abstract": "The current paper addresses the distributed guaranteed-performance consensus design problems for general high-order linear multiagent systems with leaderless and leaderfollower structures, respectively. The information about the Laplacian matrix of the interaction topology or its minimum nonzero eigenvalue is usually required in existing works on the guaranteed-performance consensus, which means that their conclusions are not completely distributed. A new translation-adaptive strategy is proposed to realize the completely distributed guaranteed-performance consensus control by using the structure feature of a complete graph in the current paper. For the leaderless case, an adaptive guaranteed-performance consensualization criterion is given in terms of Riccati inequalities and a regulation approach of the consensus control gain is presented by linear matrix inequalities. Extensions to the leader-follower cases are further investigated. Especially, the guaranteed-performance costs for leaderless and leader-follower cases are determined, respectively, which are associated with the intrinsic structure characteristic of the interaction topologies. Finally, two numerical examples are provided to demonstrate theoretical results." }
{ "title": "On Almost Controllability of Dynamical Complex Networks with Noises", "abstract": "Abstract: This paper discusses the controllability problem of complex networks. It is shown that almost any weighted complex network with noise on the strength of communication links is controllable in the sense of Kalman controllability. The concept of almost controllability is elaborated by both theoretical discussions and experimental verifications." }
1905.03404
1706.00890
D. Convergence speed analysis
In the following, we determine the convergence coefficient of multi-agent system (1) under control protocol #REFR and compare it with the convergence coefficient under the standard consensus protocol.
[ "Then the lower bound of ( ) t  is shown as min  which means the minimum convergence speed of multi-agent system (1).", "As a matter of fact, equation (28) originates from Definition 2.", "Then with regard to multiagent system (1) under the standard control protocol in #OTHEREFR , where ( ) ( ) ( )", "It can be seen that min 2 2    ; that is, the lower bound of ( ) t  is directly associated with the algebraic connectivity.", "Since there exists the relationship between min  and the algebraic connectivity, it is rational and convenient to use min  to describe the convergence speed to some extent." ]
[ "In order to ensure the effectiveness of this comparison, the control gain  is also considered in the standard consensus protocol as a reference, which means that", ". In this case, one can directly obtain that min,1", "Then substituting (17) into #OTHEREFR , the convergence coefficient of multi-agent system (1) under control protocol (2) can be described as", "Therefore, the following theorem can be obtained." ]
[ "standard consensus protocol", "control protocol" ]
method
{ "title": "Adaptive Guaranteed-Performance Consensus Control for Multiagent Systems With an Adjustable Convergence Speed", "abstract": "Adaptive guaranteed-performance consensus control problems for multi-agent systems are investigated, where the adjustable convergence speed is discussed. This paper firstly proposes a novel adaptive guaranteed-performance consensus protocol, where the communication weights can be adaptively regulated. By the state space decomposition method and the stability theory, sufficient conditions for guaranteed-performance consensus are obtained, as well as the guaranteed-performance cost. Moreover, since the convergence speed is usually adjusted by changing the algebraic connectivity in existing works, which increases the communication burden and the load of the controller, and the system topology is always given in practical applications, the lower bound of the convergence coefficient for multi-agent systems with the adaptive guaranteed-performance consensus protocol is deduced, which is linearly adjustable approximately by changing the adaptive control gain. Finally, simulation examples are introduced to demonstrate theoretical results." }
{ "title": "On Almost Controllability of Dynamical Complex Networks with Noises", "abstract": "Abstract: This paper discusses the controllability problem of complex networks. It is shown that almost any weighted complex network with noise on the strength of communication links is controllable in the sense of Kalman controllability. The concept of almost controllability is elaborated by both theoretical discussions and experimental verifications." }
1912.13457
1803.11182
Analysis of the Framework
Similar to IPGAN #REFR , its results suffer from artifacts like blurriness, since a lot of attribute information from the target images is lost.
[ "We also visualize the masks M k of AAD layers on different levels in Figure 8 , where a brighter pixel indicates a higher weight for identity embedding in Equation #OTHEREFR .", "It shows that the identity embedding takes more effect in low level layers.", "Its effective region becomes sparser in middle levels, where it activates only in some key regions that strongly relates to the face identity, such as the locations of eyes, mouth and face contours.", "Multi-level Attributes: To verify whether it is necessary to extract multi-level attributes, we compare with another baseline model called Compressed, which shares the same network structure with AEI-Net, but only utilizes the first three level embeddings z k att , k = 1, 2, 3.", "Its last embedding z 3 att is fed into all higher level AAD integrations. Its results are also compared in Figure 7 ." ]
[ "To understand what is encoded in the attributes embedding, we concatenate the embeddings z k att (bilinearly upsampled to 256 × 256 and vectorized) from all levels as a unified attribute representation. We conduct PCA to reduce vector dimensions as 512.", "We then perform tests querying faces from the training set with the nearest L-2 distances of such vectors.", "The three results illustrated in Figure 9 verify our intention, that the attributes embeddings can well reflect face attributes, such as the head pose, hair color, expression and even the existence of sunglasses on the face.", "Thus it also explains why our AEI-Net sometimes can preserve occlusions like sunglasses on the target face even without a", "(2) (3) (4) (5) (6) (7) (8) (9) (10)" ]
[ "attributes" ]
background
{ "title": "FaceShifter: Towards High Fidelity And Occlusion Aware Face Swapping", "abstract": ": The face in the source image is taken to replace the face in the target image. Results of FaceShifter appear in the right. In this work, we propose a novel two-stage framework, called FaceShifter, for high fidelity and occlusion aware face swapping. Unlike many existing face swapping works that leverage only limited information from the target image when synthesizing the swapped face, our framework, in its first stage, generates the swapped face in high-fidelity by exploiting and integrating the target attributes thoroughly and adaptively. We propose a novel attributes encoder for extracting multi-level target face attributes, and a new generator with carefully designed Adaptive Attentional Denormalization (AAD) layers to adaptively integrate the identity and the attributes for face synthesis. To address the challenging facial occlusions, we append a second stage consisting of a novel Heuristic Error Acknowledging Refinement Network (HEAR-Net). It is trained to recover anomaly regions in a self-supervised way without any manual annotations. Extensive experiments on wild faces demonstrate that our face swapping results are not only considerably more perceptually appealing, but also better identity preserving in comparison to other state-of-the-art methods." }
{ "title": "Towards Open-Set Identity Preserving Face Synthesis", "abstract": "We propose a framework based on Generative Adversarial Networks to disentangle the identity and attributes of faces, such that we can conveniently recombine different identities and attributes for identity preserving face synthesis in open domains. Previous identity preserving face synthesis processes are largely confined to synthesizing faces with known identities that are already in the training dataset. To synthesize a face with identity outside the training dataset, our framework requires one input image of that subject to produce an identity vector, and any other input face image to extract an attribute vector capturing, e.g., pose, emotion, illumination, and even the background. We then recombine the identity vector and the attribute vector to synthesize a new face of the subject with the extracted attribute. Our proposed framework does not need to annotate the attributes of faces in any way. It is trained with an asymmetric loss function to better preserve the identity and stabilize the training process. It can also effectively leverage large amounts of unlabeled training face images to further improve the fidelity of the synthesized faces for subjects that are not presented in the labeled training face dataset. Our experiments demonstrate the efficacy of the proposed framework. We also present its usage in a much broader set of applications including face frontalization, face attribute morphing, and face adversarial example detection." }
1705.00133
1603.01445
Equivalence with Sato's Definition
In recent work on verifying differential privacy over general, continuous distributions, Sato #REFR proposes an alternative definition of approximate lifting.
[]
[ "In the special case of discrete distributions, where measurability of events can be forgotten, his definition can be stated as follows.", "Definition 13 (Sato [12] ).", "Let µ 1 ∈ D(A) and µ 2 ∈ D(B), R be a binary relation over", "Notice that this definition has no witness distributions at all; instead, it uses a universal quantifier over all subsets.", "We can show that -liftings are equivalent to Sato's definition in the case of discrete distributions." ]
[ "differential privacy" ]
background
{ "title": "Relational $\\star$-Liftings for Differential Privacy", "abstract": "Recent developments in formal verification have identified approximate liftings (also known as approximate couplings) as a clean, compositional abstraction for proving differential privacy. There are two styles of definitions for this construction. Earlier definitions require the existence of one or more witness distributions, while a recent definition by Sato uses universal quantification over all sets of samples. These notions have different strengths and weaknesses: the universal version is more general than the existential ones, but the existential versions enjoy more precise composition principles. We propose a novel, existential version of approximate lifting, called -lifting, and show that it is equivalent to Sato's construction for discrete probability measures. Our work unifies all known notions of approximate lifting, giving cleaner properties, more general constructions, and more precise composition theorems for both styles of lifting, enabling richer proofs of differential privacy. We also clarify the relation between existing definitions of approximate lifting, and generalize our constructions to approximate liftings based on f -divergences. 1998 ACM Subject Classification D.2.4 Software/Program Verification survey [6] for an overview of this growing field.) In this paper, we consider approaches based on relational program logics [2-5, 11, 12]. To capture the quantitative nature of differential privacy, these systems rely on a quantitative generalization of probabilistic couplings (see, e.g., [10, 14, 15] ), called approximate liftings or ( , δ)-liftings. Existing works have considered several potential definitions. While all definitions support compositional reasoning and enable program logics that can verify complex examples from the privacy literature, the various notions of approximate liftings have different strengths and weaknesses. Broadly speaking, one class of definitions require the existence of one or two witness distributions that \"couple\" the two executions of programs. The earliest definition [3] supports accuracy-based reasoning for the Laplace mechanism, while subsequent definitions [2, 11] support more precise composition principles from differential privacy and can be generalized to other notions of distance on distributions. These definitions, and their associated program logics, were designed for discrete distributions. In the course of extending these ideas to continuous distributions, Sato [12] proposes a radically different notion of approximate lifting, which does not rely on witness distributions. Instead, it uses a universal quantification over all sets of samples. Sato shows that this definition is strictly more general than the existential versions, but it is unclear (a) whether the gap can be closed and (b) whether his construction satisfies the same composition principles enjoyed by some existential definitions. As a consequence, there is currently no single approximate lifting with the properties needed to support all existing formalized proofs of differential privacy. Furthermore, some of the most involved privacy proofs cannot be formalized at all, as their proofs require a combination of tools from several kinds of approximate liftings. After reviewing the necessary mathematical preliminaries in Section 2, we introduce our main technical contribution: a new, existential definition of approximate lifting. 
This construction, which we call ⋆-lifting, is a generalization of an existing definition by Barthe and Olmedo [2], Olmedo [11] . The key idea is to allow the witness distributions to have a larger domain, broadening the class of approximate liftings. By a maximum flow/minimum cut argument, we show that ⋆-liftings are equivalent to Sato's lifting over discrete distributions. This equivalence can be viewed as an approximate version of Strassen's theorem [13] , a classical result in probability theory describing the existence of probabilistic couplings. We present the definition of ⋆-lifting and the proof of equivalence in Section 3. Then, we show that ⋆-liftings satisfy desirable theoretical properties. We are able to leverage the equivalence of liftings in two ways. In one direction, Sato's definition gives simpler proofs of more general properties of ⋆-liftings. In the other direction, ⋆-liftings, like other existential definitions, can smoothly incorporate composition principles from the theory of differential privacy. Our connection shows that Sato's definition can use these principles in the discrete case. We describe the key theoretical properties of ⋆-liftings in Section 4. Finally, we provide a thorough comparison of ⋆-lifting with existing definitions of approximate lifting in Section 5, and describe how to construct ⋆-liftings for a more general version of approximate liftings based on f-divergences in Section 6. Overall, the equivalence of ⋆-liftings and Sato's lifting, along with the natural theoretical properties satisfied by the common notion, suggest that these definitions are two views on the same concept: an approximate version of probabilistic coupling." }
{ "title": "Approximate Relational Hoare Logic for Continuous Random Samplings", "abstract": "Approximate relational Hoare logic (apRHL) is a logic for formal verification of the differential privacy of databases written in the programming language pWHILE. Strictly speaking, however, this logic deals only with discrete random samplings. In this paper, we define the graded relational lifting of the subprobabilistic variant of Giry monad, which described differential privacy. We extend the logic apRHL with this graded lifting to deal with continuous random samplings. We give a generic method to give proof rules of apRHL for continuous random samplings." }
1710.09010
1603.01445
I. INTRODUCTION
Previous work #REFR has considered a different semantic model for standard differential privacy over continuous distributions using witness-free relational lifting, but it is not clear how to extend this model beyond differential privacy.
[ "First, Rényi divergences are not f -divergences (for one differenc, f -divergences are jointly convex while Rényi divergences are only quasi-convex #OTHEREFR ), moreover, zCDP and tCDP are supremums of Rényi divergences.", "As a result, these properties cannot be described in terms of f -divergences, nor captured in f pRHL.", "We develop new relational liftings supporting significantly more general divergences, allowing direct reasoning about RDP, zCDP, and tCDP.", "Second, the 2-witness relational lifting approach has only been proposed for discrete distributions, while many algorithms satisfying relaxations of differential privacy-indeed, the motivating examples-sample from continuous distributions, such as the Gaussian distribution.", "Handling these distributions requires a careful treatment of measure theory." ]
[ "To overcome these challenges, we generalize 2-witness liftings in two directions.", "First, we replace the notion of fdivergence with a more general class of divergences, identifying the basic properties needed for compositional reasoning. Second, we generalize to continuous probability measures.", "The main challenge is establishing a sequential composition principle-the continuous case introduces measurability requirements for composition.", "Accordingly, we extend the structure of 2-witness liftings to a new notion called approximate span-liftings, which have the necessary data to ensure closure under sequential composition.", "Finally, we instantiate our general model with divergences for RDP, zCDP, and tCDP, establishing categorical properties needed to build approximate span-liftings." ]
[ "differential privacy" ]
background
{ "title": "Approximate Span Liftings: Compositional Semantics for Relaxations of Differential Privacy", "abstract": "Abstract-We develop new abstractions for reasoning about three relaxations of differential privacy: Rényi differential privacy, zero-concentrated differential privacy, and truncated concentrated differential privacy, which express bounds on statistical divergences between two output probability distributions. In order to reason about such properties compositionally, we introduce approximate span-lifting, a novel construction extending the approximate relational lifting approaches previously developed for standard differential privacy to a more general class of divergences, and also to continuous distributions. As an application, we develop a program logic based on approximate span-liftings capable of proving relaxations of differential privacy and other statistical divergence properties." }
{ "title": "Approximate Relational Hoare Logic for Continuous Random Samplings", "abstract": "Approximate relational Hoare logic (apRHL) is a logic for formal verification of the differential privacy of databases written in the programming language pWHILE. Strictly speaking, however, this logic deals only with discrete random samplings. In this paper, we define the graded relational lifting of the subprobabilistic variant of Giry monad, which described differential privacy. We extend the logic apRHL with this graded lifting to deal with continuous random samplings. We give a generic method to give proof rules of apRHL for continuous random samplings." }
1708.05486
1201.0917
Arbitrary paths
Proof: Kratochvíl and Ueckerdt #REFR showed that the non-crossing connectors problem always has a solution when the regions form a collection of pseudo-disks [11, Theorem 2] (i.e., the boundaries of any two regions intersect in at most two points). In our context, the regions are the tubes.
[ "However, that is not always the case, as the example in Fig. 9(b) shows.", "Nevertheless, if we disallow double intersections, then we can still decide in polynomial time whether a solution exists.", "The key idea is to use a result by Kratochvíl and Ueckerdt #OTHEREFR that states that if the regions (in our case, tubes) form a set of pseudo-disks, then there is always a solution.", "Two tubes with a single intersection may not be pseudo-disks, but we can try to convert them into pseudo-disks by cutting off parts that cannot be used in any solution.", "This leads to a procedure that allows us to determine in polynomial time if a solution exists." ]
[ "To apply their result to our problem we need two things.", "First, the tubes need to be pseudo-disks.", "If no two tubes fully cross or create a double intersection, the only way in which they can interact is through single intersections.", "Two tubes that intersect in a single intersection are not always pseudo-disks, since the tube boundaries can intersect in four points.", "However, it is possible to make them pseudo-disks by cutting off the part of one of the tubes that sticks out of the other, as shown in Fig. 10(a) . We refer to this part as an ear." ]
[ "non-crossing connectors problem" ]
background
{ "title": "Non-crossing paths with geographic constraints", "abstract": "A geographic network is a graph whose vertices are restricted to lie in a prescribed region in the plane. In this paper we begin to study the following fundamental problem for geographic networks: can a given geographic network be drawn without crossings? We focus on the seemingly simple setting where each region is a vertical segment, and one wants to connect pairs of segments with a path that lies inside the convex hull of the two segments. We prove that when paths must be drawn as straight line segments, it is NP-complete to determine if a crossing-free solution exists, even if all vertical segments have unit length. In contrast, we show that when paths must be monotone curves, the question can be answered in polynomial time. In the more general case of paths that can have any shape, we show that the problem is polynomial under certain assumptions." }
{ "title": "Non-crossing Connectors in the Plane", "abstract": "We consider the non-crossing connectors problem, which is stated as follows: Given n simply connected regions R 1 , . . . , R n in the plane and finite point sets P i ⊂ R i for i = 1, . . . , n, are there non-crossing connectors γ i for (R i , P i ), i.e., arc-connected sets γ i with We prove that non-crossing connectors do always exist if the regions form a collection of pseudo-disks, i.e., the boundaries of every pair of regions intersect at most twice. We provide a simple polynomial-time algorithm if the regions are axis-aligned rectangles. Finally we prove that the general problem is NP-complete, even if the regions are convex, the boundaries of every pair of regions intersect at most four times and P i consists of only two points on the boundary of R i for i = 1, . . . , n." }
1911.08700
1504.04061
Related Works
An even more special case of (4) is synchronization over Z_2 = {1, −1} #REFR , which assumes that the x_i in (4) are real-valued and x_i = ±1.
[ "Using a more involved argument and a modified power method, Zhong and Boumal improved the bound in #OTHEREFR to σ = O( m log m ).", "In fact, this paper follows this line of works and solve the problem of (1), based on it convex relaxation #OTHEREFR .", "There are works that solve phase synchronization without using the optimization problem (4).", "#OTHEREFR studies the problem from the landscape of a proposed objective function and shows that the global minimizer is unique even when the associated graph is incomplete and follows from the Erdös-Rényi random graphs.", "#OTHEREFR proposes an approximate message passing (AMP) algorithm, and analyzes its behavior by identifying phases where the problem is easy, computationally hard, and statistically impossible." ]
[ "For this problem, #OTHEREFR shows that the solution of (5) matches the minimax lower bound on the optimal Bayes error rate for original problem (4).", "If d 1 = · · · = d m = r > 2, it is called the problem of synchronization of rotations in some literature.", "#OTHEREFR studies it from the perspective of estimation on Riemannian manifolds, and derive the Cramér-Rao bounds of synchronization, that is, lower bounds on the variance of unbiased estimators, and #OTHEREFR shows that a lower bound concentrates on its expectation.", "Distributed algorithms with theoretical guarantees on convergence are proposed in #OTHEREFR .", "The formulation (1) has applications in graph realization and point cloud registration, multiview Structure from Motion (SfM) #OTHEREFR , common lines in Cryo-EM #OTHEREFR , orthogonal least squares #OTHEREFR , and 2D/3D point set registration #OTHEREFR ." ]
[ "synchronization" ]
background
{ "title": "Tightness of the semidefinite relaxation for orthogonal trace-sum maximization", "abstract": "This paper studies an optimization problem on the sum of traces of matrix quadratic forms on m orthogonal matrices, which can be considered as a generalization of the synchronization of rotations. While the problem is nonconvex, the paper shows that its semidefinite programming relaxation can solve the original nonconvex problems exactly, under an additive noise model with small noise in the order of O(−m 1/4 ), where m is the number of orthogonal matrices. This result can be considered as a generalization of existing results on phase synchronization. This paper considers the problem of estimating m orthogonal matrices O 1 , · · · , O m with O i ∈ R d i ×r from the optimization problem: This problem is called orthogonal trace-sum maximization [30] and has application in generalized canonical correlation analysis. If d 1 = · · · = d m = r, then (1) is reduced to the the little Grothendieck problem over the orthogonal group [5], which have wide applications such as multireference alignment [4], cryo-EM [26, 31], 2D/3D point set registration [20, 15, 11], and multiview structure from motion [2, 3, 28]. While the optimization problem (1) is nonconvex and difficult to solve, Won et. al [30] studies its convex relaxation as follow. Let" }
{ "title": "Synchronization over Z2 and community detection in multiplex signed networks with constraints", "abstract": "Abstract. Finding group elements from noisy measurements of their pairwise ratios is also known as the group synchronization problem, first introduced in the context of the group SO(2) of planar rotations. The usefulness of synchronization over the group Z 2 has been demonstrated in recent algorithms for localization of sensor networks and three-dimensional structuring of molecules. In this paper, we focus on synchronization over Z 2 , and consider the problem of identifying communities in a multiplex network when the interaction between the nodes is described by a signed (and possibly weighted) measure of similarity, and when the multiplex network has a natural partition into two communities, of possibly different sizes. In the setting where one has the additional information that certain subsets of nodes represent the same (unknown) group element, we consider and compare several algorithms for synchronization over Z 2 , based on spectral and semidefinite programming relaxations (SDP), and message passing algorithms. In other words, all nodes within such a subset represent the same unknown group element, and one has available noisy pairwise measurements between pairs of nodes that belong to different non-overlapping subsets. Following a recent analysis of the eigenvector method for synchronization over SO (2), we analyze the robustness to noise of the eigenvector method for synchronization over Z 2 , when the underlying graph of pairwise measurements is the Erdős-Rényi random graph, using results from the random matrix theory literature on the largest eigenvalue of rank-1 deformation of large random matrices. We also propose a message passing synchronization algorithm, inspired by the standard belief propagation algorithm, that outperforms the existing eigenvector synchronization algorithm only for certain classes of graphs and noise models, and enjoys the flexibility of incorporating additional constraints that may not be easily accommodated by any of the other spectral or SDP-based methods. We apply the synchronization methods both to several synthetic models and a real data set of roll call voting patterns in the U.S. Congress across time, to identify the two existing communities, i.e., the Democratic and Republican parties. Finally, we discuss a number of related open problems and future research directions. Key words. Eigenvectors, group synchronization, semidefinite programming, multiplex networks, spectral algorithms, random matrix theory, bipartite networks, message passing algorithms, voting networks. 1. Introduction. During the last decade, the emerging area of network science has witnessed an explosive growth, with virtually thousands of papers written on the topic [42, 41] . Much of this work has focused on the detection of a mesoscale structure known as community structure, where subgroups called communities are composed of nodes densely connected with each other, while the connection between nodes across different communities is relatively sparse. There is already a vast literature on community detection [52, 26, 29, 43] [54] , and brain networks from the neuroscience community [9] . In terms of the underlying (possibly weighted) graph associated to the network, the mathematical task is that of identifying clusters of highly interconnected nodes, or subgraphs whose internal edge density is large compared to the rest of the graph. 
In this paper, we consider the related problem of identifying communities in a graph, for the particular case when the interaction between the nodes is described by a signed measure of similarity or correlation, and when the network has a natural separation into two (not necessarily equally sized) communities. The goal is to recover the two subgroups of nodes whose internal pairwise similarity or correlation is significantly stronger when compared to the rest of the network. Note that in the ground truth solution, each node belongs to exactly one of the two communities, and the task is to recover these two communities given a noisy set of pairwise signed interactions. Signed networks have also been recently considered in the context of identifying community structure in a voting network in the United States General Assembly [38] . In a dynamical setting, community detection algorithms have been used to analyze multiple time series data, such as rates in the foreign exchange market [23] . In [31] , the authors introduced a multiresolution module detection approach for dense weighted networks, and successfully applied it to stock price correlations data. A random matrix theory based technique has been recently proposed for the particular task of clustering correlation data [37] , and was shown to be able to capture well-known structural properties of the financial stock market. In this work, the graphs we consider have a special structure, in the form of a multiplex network, in the sense that each graph can be decomposed into a sequence of subgraphs, each of which corresponds to a layer of the network, and there exist interconnections linking nodes across different layers. We refer the reader to [20] for a mathematical formulation of multilayer networks, of which multiplex networks are a subset. Unlike a multilayer network, a multiplex network only allows for a single type of inter-layer connections via which any given node is connected only to its counterpart nodes in the other layers. The rich variety of interlayer connections in a multilayer" }
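The eigenvector method for synchronization over Z_2 analyzed in this record rounds the top eigenvector of the noisy measurement matrix. A minimal sketch on synthetic data follows; the noise model (independent sign flips of pairwise products) is an illustrative stand-in for the Erdős-Rényi measurement graphs studied in the paper.

```python
# Sketch of spectral Z_2 synchronization: recover signs x_i in {+1, -1}
# from noisy pairwise products H_ij ~ x_i * x_j via the top eigenvector.
import numpy as np

rng = np.random.default_rng(1)
n, flip_prob = 200, 0.2
x = rng.choice([-1.0, 1.0], size=n)

H = np.outer(x, x)
flips = np.triu(rng.random((n, n)) < flip_prob, 1)   # upper-triangular mask
H[flips | flips.T] *= -1                             # symmetric sign flips
np.fill_diagonal(H, 0)

eigvals, eigvecs = np.linalg.eigh(H)
estimate = np.sign(eigvecs[:, -1])                   # round top eigenvector

# Accuracy is only defined up to a global sign flip.
agree = np.mean(estimate == x)
print(max(agree, 1 - agree))
```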
1501.06678
1502.06732
Definition 5 (Directed Edge Laplacian) The edge Laplacian of a directed graph G is defined as
The proof for the weighted version of L_e(G) can be easily extended from Lemma 5 of our previous work #REFR ; thus the details are omitted here.
[ "To provide a deeper insights into what the edge Laplacian L e (G) offers in the analysis and synthesis of multi-agent systems, we propose the following lemma. PROOF." ]
[ "Obviously, if G = G T , then G has L = N − 1 edges and all the eigenvalues of L e (G) are nonzero.", "In the following paper, when we deal with a quasi-strongly connected graph, it refers to a general directed graph G = G T ∪G C unless noted otherwise.", "Lemma 7 Considering a quasi-strongly connected graph G of order N, the edge Laplacian L e (G) has L − N + 1 zero eigenvalues and zero is a simple root of the minimal polynomial of L e (G).", "PROOF.", "The result can be lightly extended from lemma 6 of our previous work #OTHEREFR ." ]
[ "weighted version", "lemma" ]
background
{ "title": "Edge Agreement of Multi-agent System with Quantized Measurements via Directed Edge Laplacian", "abstract": "This work explores the edge agreement problem of the second-order nonlinear multiagent system under quantized measurements. To begin with, the general concepts of weighted edge Laplacian of directed graph are proposed and its algebraic properties are further explored. Based on the essential edge Laplacian, we derive a model reduction representation of the closed-loop multi-agent system based on the spanning tree subgraph. Meanwhile, the edge agreement problem of second-order nonlinear multi-agent system under quantized effects is studied, in which both uniform and logarithmic quantizers are considered. Particularly, for the uniform quantizers, we provide the upper bound of the radius of the agreement neighborhood, which indicates that the radius increases with the quantization interval. While for the logarithmic quantizers, the agents converge exponentially to the desired agreement equilibrium. Additionally, we also provide the estimates of the convergence rate as well as indicate that the coarser the quantizer is, the slower the convergence speed. Finally, simulation results are given to verify the theoretical analysis." }
{ "title": "Convergence Analysis using the Edge Laplacian: Robust Consensus of Nonlinear Multi-agent Systems via ISS Method", "abstract": "This study develops an original and innovative matrix representation with respect to the information flow for networked multi-agent system. To begin with, the general concepts of the edge Laplacian of digraph are proposed with its algebraic properties. Benefit from this novel graph-theoretic tool, we can build a bridge between the consensus problem and the edge agreement problem; we also show that the edge Laplacian sheds a new light on solving the leaderless consensus problem. Based on the edge agreement framework, the technical challenges caused by unknown but bounded disturbances and inherently nonlinear dynamics can be well handled. In particular, we design an integrated procedure for a new robust consensus protocol that is based on a blend of algebraic graph theory and the newly developed cyclic-small-gain theorem. Besides, to highlight the intricate relationship between the original graph and cyclic-small-gain theorem, the concept of edge-interconnection graph is introduced for the first time. Finally, simulation results are provided to verify the theoretical analysis." }
1910.09040
1607.03483
Seed-set expansion based on LPs
An important question that arises in seed-set expansion is how to choose the weights of the GPR in order to ensure near-optimal or optimal classification #REFR .
[ "Consequently, thresholding properly combined LP values may allow for classifying vertices as being inside or outside of the community.", "Formally, each vertex v in a hypergraph G(V, E) is associated with a vector of LPs (x", "The GPRs of vertices are compared to a threshold to determine whether they belong to the community of interest.", "Consequently, GPRs lead to linear classifiers that use LPs as vertex features.", "The above described GPR formulation includes Personalized PR (PPR) #OTHEREFR , where γ k = (1 − α)α k , and heat-kernal PR (HPR) #OTHEREFR , where γ k = e −h h k /k!, for properly chosen α, h." ]
[ "To this end, start with a partition into two communities V 0 , V 1 of V .", "Let a = (a (0) , a #OTHEREFR , ...) denote the arithmetic mean (centroid) of the LPs of vertices v ∈ V 0 , #OTHEREFR , ...) denote the arithmetic mean (centroid) of the LPs of ver-", "v .", "If the only available information about the distribution of the LPs are a and b, a discriminant with weights γ k = a (k) −b (k) is optimal since the deterministic boundary is orthogonal to the line that connects the centroids of the two communities. Klouman et al.", "#OTHEREFR observed that for community detection over graphs generated by standard SBMs #OTHEREFR , such a discriminant corresponds to PPR with an adequately chosen parameter α." ]
[ "seed-set expansion" ]
background
{ "title": "Landing Probabilities of Random Walks for Seed-Set Expansion in Hypergraphs", "abstract": "We describe the first known mean-field study of landing probabilities for random walks on hypergraphs. In particular, we examine clique-expansion and tensor methods and evaluate their mean-field characteristics over a class of random hypergraph models for the purpose of seed-set community expansion. We describe parameter regimes in which the two methods outperform each other and propose a hybrid expansion method that uses partial clique-expansion to reduce the projection distortion and low-complexity tensor methods applied directly on the partially expanded hypergraphs. 1" }
{ "title": "Block Models and Personalized PageRank", "abstract": "Methods for ranking the importance of nodes in a network have a rich history in machine learning and across domains that analyze structured data. Recent work has evaluated these methods though the seed set expansion problem: given a subset S of nodes from a community of interest in an underlying graph, can we reliably identify the rest of the community? We start from the observation that the most widely used techniques for this problem, personalized PageRank and heat kernel methods, operate in the space of landing probabilities of a random walk rooted at the seed set, ranking nodes according to weighted sums of landing probabilities of different length walks. Both schemes, however, lack an a priori relationship to the seed set objective. In this work we develop a principled framework for evaluating ranking methods by studying seed set expansion applied to the stochastic block model. We derive the optimal gradient for separating the landing probabilities of two classes in a stochastic block model, and find, surprisingly, that under reasonable assumptions the gradient is asymptotically equivalent to personalized PageRank for a specific choice of the PageRank parameter α that depends on the block model parameters. This connection provides a novel formal motivation for the success of personalized PageRank in seed set expansion and node ranking generally. We use this connection to propose more advanced techniques incorporating higher moments of landing probabilities; our advanced methods exhibit greatly improved performance despite being simple linear classification rules, and are even competitive with belief propagation." }
1910.09040
1607.03483
Concentration results
The result is also consistent with the finding for the special case d = 2 described in #REFR .
[ "The mean-field of the LPs for the d-hSBM(n, p, q) model in the clique-expansion setting is described in the following theorem. ce . Then for all k ≥ 0 we havē", "whereā,b satisfy the following recurrence relation", "Remark 3.1. The eigenvalue decomposition leads tō", "This result reveals that the geometric discriminant under the d-hSBM(n, p, q) is of the same form as that of PPR with parameter α = p−q p+(2 d−1 −1)q ." ]
[ "Next we show that the geometric centroids of LPs of clique-expansion RWoHs will asymptotically concentrate around their mean-field counterparts, which establishes consistency of the mean-field analysis.", "v;ce be the LPs of a clique-expansion RWoHs on G (ce) satisfying #OTHEREFR .", "Also assume that n d−1 q 2 log n → ∞.", "Then, for any constant > 0, n sufficiently large and a bounded constant k ≥ 0, one has", "with probability at least 1 − o(1)." ]
[ "finding" ]
result
{ "title": "Landing Probabilities of Random Walks for Seed-Set Expansion in Hypergraphs", "abstract": "We describe the first known mean-field study of landing probabilities for random walks on hypergraphs. In particular, we examine clique-expansion and tensor methods and evaluate their mean-field characteristics over a class of random hypergraph models for the purpose of seed-set community expansion. We describe parameter regimes in which the two methods outperform each other and propose a hybrid expansion method that uses partial clique-expansion to reduce the projection distortion and low-complexity tensor methods applied directly on the partially expanded hypergraphs. 1" }
{ "title": "Block Models and Personalized PageRank", "abstract": "Methods for ranking the importance of nodes in a network have a rich history in machine learning and across domains that analyze structured data. Recent work has evaluated these methods though the seed set expansion problem: given a subset S of nodes from a community of interest in an underlying graph, can we reliably identify the rest of the community? We start from the observation that the most widely used techniques for this problem, personalized PageRank and heat kernel methods, operate in the space of landing probabilities of a random walk rooted at the seed set, ranking nodes according to weighted sums of landing probabilities of different length walks. Both schemes, however, lack an a priori relationship to the seed set objective. In this work we develop a principled framework for evaluating ranking methods by studying seed set expansion applied to the stochastic block model. We derive the optimal gradient for separating the landing probabilities of two classes in a stochastic block model, and find, surprisingly, that under reasonable assumptions the gradient is asymptotically equivalent to personalized PageRank for a specific choice of the PageRank parameter α that depends on the block model parameters. This connection provides a novel formal motivation for the success of personalized PageRank in seed set expansion and node ranking generally. We use this connection to propose more advanced techniques incorporating higher moments of landing probabilities; our advanced methods exhibit greatly improved performance despite being simple linear classification rules, and are even competitive with belief propagation." }
1910.09040
1607.03483
Construction of GPR based on landing probabilities
Following #REFR , the geometric discriminant of interest equals w^T x_v , where x_v is the landing probability vector of the vertex v.
[ "In what follows, we use the results of our theoretical results to propose new GPR methods for hypergraph clustering." ]
[ "If only the first moments of the LPs are available, the optimal choice of w corresponding to the maximal marginal separator of the centroids is given in Theorems 3.2 and 3.4 for cliqueexpansion RWoHs and tensor RWoHs, respectively.", "The geometric discriminant only takes the first-order moments of LPs into account.", "As pointed out in #OTHEREFR , the Fisher discriminant is expected to have better classification performance since it also make use of the covariances of LPs (see Figure 3 ). More precisely, the Fisher discriminant takes the form", "x v is the landing probability vector of the vertex v and Σ is the covariance matrix of the landing probability vector.", "The authors of #OTHEREFR empirically leveraged the information about the second-order moments of LPs." ]
[ "vertex", "landing probability vector" ]
background
{ "title": "Landing Probabilities of Random Walks for Seed-Set Expansion in Hypergraphs", "abstract": "We describe the first known mean-field study of landing probabilities for random walks on hypergraphs. In particular, we examine clique-expansion and tensor methods and evaluate their mean-field characteristics over a class of random hypergraph models for the purpose of seed-set community expansion. We describe parameter regimes in which the two methods outperform each other and propose a hybrid expansion method that uses partial clique-expansion to reduce the projection distortion and low-complexity tensor methods applied directly on the partially expanded hypergraphs. 1" }
{ "title": "Block Models and Personalized PageRank", "abstract": "Methods for ranking the importance of nodes in a network have a rich history in machine learning and across domains that analyze structured data. Recent work has evaluated these methods though the seed set expansion problem: given a subset S of nodes from a community of interest in an underlying graph, can we reliably identify the rest of the community? We start from the observation that the most widely used techniques for this problem, personalized PageRank and heat kernel methods, operate in the space of landing probabilities of a random walk rooted at the seed set, ranking nodes according to weighted sums of landing probabilities of different length walks. Both schemes, however, lack an a priori relationship to the seed set objective. In this work we develop a principled framework for evaluating ranking methods by studying seed set expansion applied to the stochastic block model. We derive the optimal gradient for separating the landing probabilities of two classes in a stochastic block model, and find, surprisingly, that under reasonable assumptions the gradient is asymptotically equivalent to personalized PageRank for a specific choice of the PageRank parameter α that depends on the block model parameters. This connection provides a novel formal motivation for the success of personalized PageRank in seed set expansion and node ranking generally. We use this connection to propose more advanced techniques incorporating higher moments of landing probabilities; our advanced methods exhibit greatly improved performance despite being simple linear classification rules, and are even competitive with belief propagation." }
1910.09040
1607.03483
Construction of GPR based on landing probabilities
The authors of #REFR empirically leveraged the information about the second-order moments of LPs.
[ "Following #OTHEREFR , the geometric discriminant of interest equals w T x v , where x v is the landing probability vector of the vertex v.", "If only the first moments of the LPs are available, the optimal choice of w corresponding to the maximal marginal separator of the centroids is given in Theorems 3.2 and 3.4 for cliqueexpansion RWoHs and tensor RWoHs, respectively.", "The geometric discriminant only takes the first-order moments of LPs into account.", "As pointed out in #OTHEREFR , the Fisher discriminant is expected to have better classification performance since it also make use of the covariances of LPs (see Figure 3 ). More precisely, the Fisher discriminant takes the form", "x v is the landing probability vector of the vertex v and Σ is the covariance matrix of the landing probability vector." ]
[ "They showed that the Fisher discriminant has a performance that nearly matches that of belief propagation, the statistically optimal method for community detection on SBM [36, 37, 38].", "We therefore turn our attention to Fisher discriminant corresponding to clique-expansion and tensor RWoHs.", "Recall that our theoretical results shows that tensor RWoHs lead to larger centroid distances compared to those of clique-expansion RWoHs.", "Most importantly, the difference between the centroid distances of the two methods increase with the hyperedge size d.", "Hence, for large hyperedges sizes the theoretical results suggest that one should not directly use cliqueexpansion combined with PR methods." ]
[ "second-order moments", "information" ]
background
{ "title": "Landing Probabilities of Random Walks for Seed-Set Expansion in Hypergraphs", "abstract": "We describe the first known mean-field study of landing probabilities for random walks on hypergraphs. In particular, we examine clique-expansion and tensor methods and evaluate their mean-field characteristics over a class of random hypergraph models for the purpose of seed-set community expansion. We describe parameter regimes in which the two methods outperform each other and propose a hybrid expansion method that uses partial clique-expansion to reduce the projection distortion and low-complexity tensor methods applied directly on the partially expanded hypergraphs. 1" }
{ "title": "Block Models and Personalized PageRank", "abstract": "Methods for ranking the importance of nodes in a network have a rich history in machine learning and across domains that analyze structured data. Recent work has evaluated these methods though the seed set expansion problem: given a subset S of nodes from a community of interest in an underlying graph, can we reliably identify the rest of the community? We start from the observation that the most widely used techniques for this problem, personalized PageRank and heat kernel methods, operate in the space of landing probabilities of a random walk rooted at the seed set, ranking nodes according to weighted sums of landing probabilities of different length walks. Both schemes, however, lack an a priori relationship to the seed set objective. In this work we develop a principled framework for evaluating ranking methods by studying seed set expansion applied to the stochastic block model. We derive the optimal gradient for separating the landing probabilities of two classes in a stochastic block model, and find, surprisingly, that under reasonable assumptions the gradient is asymptotically equivalent to personalized PageRank for a specific choice of the PageRank parameter α that depends on the block model parameters. This connection provides a novel formal motivation for the success of personalized PageRank in seed set expansion and node ranking generally. We use this connection to propose more advanced techniques incorporating higher moments of landing probabilities; our advanced methods exhibit greatly improved performance despite being simple linear classification rules, and are even competitive with belief propagation." }
1806.07640
1607.03483
Introduction
The authors of #REFR have carried out their analysis in dense stochastic block models when the edge probabilities are fixed, i.e., they do not scale with the size of the graph.
[ "On the other hand, the analysis of Personalized PageRank on undirected random graph models is more difficult because a simple random walk on an undirected graph can pass through an edge in both directions, thus creating many short cycles and loops.", "To the best of our knowledge, #OTHEREFR is the only work studying Personalized PageRank on undirected Erdős-Rényi (ER) random graphs and stochastic block models.", "For the analysis of #OTHEREFR to hold, the personalization vector or the restart distribution has to be sufficiently delocalized.", "In #OTHEREFR a mean-field model for the standard PageRank has been proposed without a formal justification.", "In the recent work #OTHEREFR a mean-field model has been proposed for a modification of Personalized PageRank where the contributions from all paths are same." ]
[ "In the present work we analyze Personalized PageRank with a localized restart distribution.", "As a graph model, we consider an ER random graph with a smaller denser ER graph planted within.", "We establish conditions for concentration and non-concentration of PPR under different scaling laws of the edge probabilities.", "In particular, we show that when the graph is not too sparse there is a concentration to the mean field model of PPR when the size of subgraph scales linearly and the number of seeds scales sufficiently fast with the graph size.", "In other words, we establish sufficient conditions for the convergence of PPR to its mean-field form in medium dense graphs." ]
[ "dense stochastic block" ]
background
{ "title": "Mean Field Analysis of Personalized PageRank with Implications for Local Graph Clustering", "abstract": "We analyse a mean-field model of Personalized PageRank on the Erdős-Rényi random graph containing a denser planted Erdős-Rényi subgraph. We investigate the regimes where the values of Personalized PageRank concentrate around the mean-field value. We also study the optimization of the damping factor, the only parameter in Personalized PageRank. Our theoretical results help to understand the applicability of Personalized PageRank and its limitations for local graph clustering." }
{ "title": "Block Models and Personalized PageRank", "abstract": "Methods for ranking the importance of nodes in a network have a rich history in machine learning and across domains that analyze structured data. Recent work has evaluated these methods though the seed set expansion problem: given a subset S of nodes from a community of interest in an underlying graph, can we reliably identify the rest of the community? We start from the observation that the most widely used techniques for this problem, personalized PageRank and heat kernel methods, operate in the space of landing probabilities of a random walk rooted at the seed set, ranking nodes according to weighted sums of landing probabilities of different length walks. Both schemes, however, lack an a priori relationship to the seed set objective. In this work we develop a principled framework for evaluating ranking methods by studying seed set expansion applied to the stochastic block model. We derive the optimal gradient for separating the landing probabilities of two classes in a stochastic block model, and find, surprisingly, that under reasonable assumptions the gradient is asymptotically equivalent to personalized PageRank for a specific choice of the PageRank parameter α that depends on the block model parameters. This connection provides a novel formal motivation for the success of personalized PageRank in seed set expansion and node ranking generally. We use this connection to propose more advanced techniques incorporating higher moments of landing probabilities; our advanced methods exhibit greatly improved performance despite being simple linear classification rules, and are even competitive with belief propagation." }
1811.10797
1607.03483
Spectral multi-length embeddings
According to #REFR , for SPD matrices, the SVD is identical to the eigenvalue decomposition (EVD).
[ "While any symmetric S that obeys (4) can be used for constructing multi-length similarities (cf.", "(5)), certain desirable properties may materialize by properly designing S. We begin by recalling the following identity", "where P + N denotes the space of N × N symmetric positive definite (SPD) matrices, and Λ is the diagonal matrix that contains the eigenvalues of S sorted in decreasing order." ]
[ "Thus, if S ∈ P + N , the solution to (2) is also given as (cf. (8))", "where U d are also the first d eigenvectors of S, and", "is the K−order polynomial of its eigenvalues defined by θ.", "Consider now that we specify S to be", "Clearly, (11) is SPD; this follows upon recalling that" ]
[ "eigenvalue decomposition", "SVD" ]
background
{ "title": "Adaptive-similarity node embedding for scalable learning over graphs.", "abstract": "Abstract-Node embedding is the task of extracting informative and descriptive features over the nodes of a graph. The importance of node embeddings for graph analytics, as well as learning tasks such as node classification, link prediction and community detection, has led to increased interest on the problem leading to a number of recent advances. Still, node embedding is faced with a several important challenges. Practical node embedding methods are required to cope with real-world graphs that arise from a variety of different domains, with inherently diverse underlying processes and similarity structures. On the other hand, much like PCA in the feature domain, node embedding is an inherently unsupervised task; in lack of metadata used for validation, practical methods may require standardization and limiting the use of tunable hyperparameters. Finally, node embedding methods are faced with maintaining scalability in the face of large-scale real-world graphs of ever-increasing sizes. In the present work, we propose an adaptive node embedding framework that adjusts the embedding process to a given underlying graph, in a fully unsupervised manner. To achieve this, we adopt the notion of a tunable node similarity matrix that assigns weights on paths of different length. The design of the multilength similarities ensures that the resulting embeddings also inherit interpretable spectral properties. The proposed model is carefully studied, interpreted, and numerically evaluated using stochastic block models. Moreover, an algorithmic scheme is proposed for training the model parameters effieciently and in an unsupervised manner. We perform extensive node classification, link prediction, and clustering experiments on many real world graphs from various domains, and compare with state-of-the-art scalable and unsupervised node embedding alternatives. The proposed method enjoys superior performance in many cases, while also yielding interpretable information on the underlying structure of the graph." }
{ "title": "Block Models and Personalized PageRank", "abstract": "Methods for ranking the importance of nodes in a network have a rich history in machine learning and across domains that analyze structured data. Recent work has evaluated these methods though the seed set expansion problem: given a subset S of nodes from a community of interest in an underlying graph, can we reliably identify the rest of the community? We start from the observation that the most widely used techniques for this problem, personalized PageRank and heat kernel methods, operate in the space of landing probabilities of a random walk rooted at the seed set, ranking nodes according to weighted sums of landing probabilities of different length walks. Both schemes, however, lack an a priori relationship to the seed set objective. In this work we develop a principled framework for evaluating ranking methods by studying seed set expansion applied to the stochastic block model. We derive the optimal gradient for separating the landing probabilities of two classes in a stochastic block model, and find, surprisingly, that under reasonable assumptions the gradient is asymptotically equivalent to personalized PageRank for a specific choice of the PageRank parameter α that depends on the block model parameters. This connection provides a novel formal motivation for the success of personalized PageRank in seed set expansion and node ranking generally. We use this connection to propose more advanced techniques incorporating higher moments of landing probabilities; our advanced methods exhibit greatly improved performance despite being simple linear classification rules, and are even competitive with belief propagation." }
1605.09781
1210.0866
Related Work
In particular, the authors would like to underline the similarities of this study with a recent work of Adcock, Rubin and Carlsson #REFR .
[ "A real comparison with other research groups or with a competing commercial software is difficult to assess.", "Algorithms based on persistent homology have been used by other research groups in different applications." ]
[ "In this paper, persistent homology has been used to classify hepatic (liver) lesions using multidimensional persistent homology.", "Indeed, the intensity filtration used there is the same filtering function used here for the three color channels and the idea of the border filtration used there is very similar to the filtering function for the boundary defined in this work.", "However, apart from the application field, there are important technical differences between the two systems as well: in #OTHEREFR , the entire image serves as simplicial complex, whereas here only the segmented part of the skin lesion is considered.", "Moreover, here the RGB images led the authors to consider a much larger number of filtering functions.", "Even more importantly, the classification approaches differ in a significant way: in the case of #OTHEREFR , a support vector machine method is used to classify the images, whereas in this case a k-NN method is chosen, with the final aim to replace the classification result with the retrieval itself." ]
[ "similarities" ]
result
{ "title": "A Feasibility Study for a Persistent Homology-Based k-Nearest Neighbor Search Algorithm in Melanoma Detection", "abstract": "Abstract Persistent homology is a fairly new branch of computational topology which combines geometry and topology for an effective shape description of use in Pattern Recognition. In particular, it registers through \"Betti Numbers\" the presence of holes and their persistence while a parameter (\"filtering function\") is varied. In this paper, some recent developments in this field are integrated in a k-nearest neighbor search algorithm suited for an automatic retrieval of melanocytic lesions. Since long, dermatologists use five morphological parameters (A = asymmetry, B = boundary, C = color, D = diameter, E = evolution) for assessing the malignancy of a lesion. The algorithm is based on a qualitative assessment of the segmented images by computing both 1 and 2-dimensional persistent Betti Number functions related to the ABCDE parameters and to the internal texture of the lesion. The results of a feasibility test on a set of 107 melanocytic lesions are reported in the section dedicated to the numerical experiments. B Ivan Tomba" }
{ "title": "Classification of Hepatic Lesions using the Matching Metric", "abstract": "In this paper we present a methodology of classifying hepatic (liver) lesions using multidimensional persistent homology, the matching metric (also called the bottleneck distance), and a support vector machine. We present our classification results on a dataset of 132 lesions that have been outlined and annotated by radiologists. We find that topological features are useful in the classification of hepatic lesions. We also find that two-dimensional persistent homology outperforms one-dimensional persistent homology in this application." }
1907.08276
1807.10446
Introduction
The approach is validated by using the 127 Trojan samples collected from a real-world banking environment in the UK #REFR .
[ "Since the top banking botnets and takedown efforts in 2014 and 2015, researchers #OTHEREFR have observed cybercriminals learning from past experience and quickly adapting to more sophisticated technologies commonly seen in advanced persistent threat (APT) attacks.", "Instead of stealing credentials by infecting banking customer's computers, cybercriminals as reported in #OTHEREFR directly targeted organizations, banking networks using APT techniques for financial gain as opposed to espionage.", "Financial organizations have been utilizing Cyber Kill Chain (CKC) Taxonomies to support defensive and investigative tactics for analysts and experts in organizations to perform their day-to-day tasks 32nd Conference on Neural Information Processing Systems (NIPS 2018), Montréal, Canada. against APT.", "CKC is based on the kill chain tactic of the US military's F2T2EA (Find, Fix, Track, Target, Engage and Assess) #OTHEREFR and Cyber Kill Chain (CKC) is one of the most widely used operational threat intelligence models to explain intrusion campaign activities in seven stages.", "Authors in #OTHEREFR further proposed a CKC-based taxonomy for Banking Trojans in supporting security experts on the banking/financial industry sector." ]
[ "The steps below describe in greater detail how APT-based banking trojan typically works:", "1.", "Reconnaissance and Weaponization: Gathering information and preparation of an attack.", "Using Carbanak APT #OTHEREFR as an example, Cybercriminals registered new spoofing domains to impersonate a legitimate software or tech company in later spear phishing emails claiming required software update.", "2." ]
[ "127 Trojan samples" ]
method
{ "title": "An AI-based, Multi-stage detection system of banking botnets", "abstract": "Banking Trojans, botnets are primary drivers of financially-motivated cybercrime. In this paper, we first analyzed how an APT-based banking botnet works step by step through the whole lifecycle. Specifically, we present a multi-stage system that detects malicious banking botnet activities which potentially target the organizations. The system leverages Cyber Data Lakes as well as multiple artificial intelligence techniques at different stages. The evaluation results using public datasets showed that Deep Learning based detections were highly successful compared with baseline models." }
{ "title": "A Cyber Kill Chain Based Taxonomy of Banking Trojans for Evolutionary Computational Intelligence", "abstract": "Malware such as banking Trojans are popular with financially-motivated cybercriminals. Detection of banking Trojans remains a challenging task, due to the constant evolution of techniques used to obfuscate and circumvent existing detection and security solutions. Having a malware taxonomy can facilitate the design of mitigation strategies such as those based on evolutionary computational intelligence. Specifically, in this paper, we propose a cyber kill chain based taxonomy of banking Trojans features. This threat intelligence based taxonomy providing a stage-by-stage operational understanding of a cyber-attack, can be highly beneficial to security practitioners and the design of evolutionary computational intelligence on Trojans detection and mitigation strategy. The proposed taxonomy is validated by using a real-world dataset of 127 banking Trojans" }
1908.09590
1809.05807
Comparisons with models in the literature
DUPMN #REFR ) also uses a hierarchical LSTM as base model and incorporates attributes as two separate deep memory networks, one for each attribute.
[ "2.", "UPDMN #OTHEREFR uses an LSTM classifier as base model and incorporates attributes as a separate deep memory network that uses other related documents as memory.", "3.", "NSC #OTHEREFR ) uses a hierarchical LSTM classifier as base model and incorporates attributes using the bias-attention method on both word-and sentence-level LSTMs.", "4." ]
[ "5.", "PMA #OTHEREFR ) is similar to NSC but uses external features such as the ranking preference method of a specific user.", "6.", "HCSC (Amplayo et al., 2018a) uses a combination of BiLSTM and CNN as base model, incorporates attributes using the biasattention method, and also considers the existence of cold start entities.", "7." ]
[ "hierarchical LSTM", "memory network" ]
method
{ "title": "Rethinking Attribute Representation and Injection for Sentiment Classification", "abstract": "Text attributes, such as user and product information in product reviews, have been used to improve the performance of sentiment classification models. The de facto standard method is to incorporate them as additional biases in the attention mechanism, and more performance gains are achieved by extending the model architecture. In this paper, we show that the above method is the least effective way to represent and inject attributes. To demonstrate this hypothesis, unlike previous models with complicated architectures, we limit our base model to a simple BiLSTM with attention classifier, and instead focus on how and where the attributes should be incorporated in the model. We propose to represent attributes as chunk-wise importance weight matrices and consider four locations in the model (i.e., embedding, encoding, attention, classifier) to inject attributes. Experiments show that our proposed method achieves significant improvements over the standard approach and that attention mechanism is the worst location to inject attributes, contradicting prior work. We also outperform the state-of-the-art despite our use of a simple base model. Finally, we show that these representations transfer well to other tasks 1 ." }
{ "title": "Dual Memory Network Model for Biased Product Review Classification", "abstract": "In sentiment analysis (SA) of product reviews, both user and product information are proven to be useful. Current tasks handle user profile and product information in a unified model which may not be able to learn salient features of users and products effectively. In this work, we propose a dual user and product memory network (DUPMN) model to learn user profiles and product reviews using separate memory networks. Then, the two representations are used jointly for sentiment prediction. The use of separate models aims to capture user profiles and product information more effectively. Compared to state-of-theart unified prediction models, the evaluations on three benchmark datasets, IMDB, Yelp13, and Yelp14, show that our dual learning model gives performance gain of 0.6%, 1.2%, and 0.9%, respectively. The improvements are also deemed very significant measured by p-values." }
1701.07518
0909.2622
IV. COMPOUND SECRECY CAPACITY WITH EAVESDROPPER MEAN UNCERTAINTY
For this particular scenario, q_ns takes the same form as in #REFR , but with u_e = u^* .
[ "Proof.", "Follows immediately by Theorem 1 while realizing that the worst eavesdropper mean channel is in the direction u * .", "Corollary 1 extends Theorem 1 to the case of uncertainty about the eavesdropper mean channel direction.", "We can conclude that, since the transmitter does not know the eavesdropper mean channel, it design its signal assuming the worst eavesdropper mean channel.", "Again, we note that, NS beamforming still can be introduced as an alternative solution against an eavesdropper with mean direction uncertainty." ]
[]
[ "form", "u" ]
background
{ "title": "On the compound MIMO wiretap channel with mean feedback", "abstract": "Compound MIMO wiretap channel with double sided uncertainty is considered under channel mean information model. In mean information model, channel variations are centered around its mean value which is fed back to the transmitter. We show that the worst case main channel is anti-parallel to the channel mean information resulting in an overall unit rank channel. Further, the worst eavesdropper channel is shown to be isotropic around its mean information. Accordingly, we provide the capacity achieving beamforming direction. We show that the saddle point property holds under mean information model and, thus, compound secrecy capacity equals to the worst case capacity over the class of uncertainty. Moreover, capacity achieving beamforming direction is found to require matrix inversion, thus, we derive null steering (NS) beamforming as an alternative sub-optimal solution that precludes the necessity of matrix inversion. NS beamformer is the beamforming direction orthogonal to the eavesdropper mean channel that maintains the maximum possible gain in the direction mean main channel. Extensive computer simulation reveals that NS beamforming performs very close to the optimal solution. It also verifies that, NS beamforming outperforms both maximum ratio transmission (MRT) and zero forcing (ZF) beamforming approaches over the entire SNR range. Finally, an equivalence relation with MIMO wiretap channel in Rician fading environment is established." }
{ "title": "Transmitter Optimization for Achieving Secrecy Capacity in Gaussian MIMO Wiretap Channels", "abstract": "We consider a Gaussian multiple-input multiple-output (MIMO) wiretap channel model, where there exists a transmitter, a legitimate receiver and an eavesdropper, each node equipped with multiple antennas. We study the problem of finding the optimal input covariance matrix that achieves secrecy capacity subject to a power constraint, which leads to a non-convex optimization problem that is in general difficult to solve. Existing results for this problem address the case in which the transmitter and the legitimate receiver have two antennas each and the eavesdropper has one antenna. For the general cases, it has been shown that the optimal input covariance matrix has low rank when the difference between the Grams of the eavesdropper and the legitimate receiver channel matrices is indefinite or semi-definite, while it may have low rank or full rank when the difference is positive definite. In this paper, the aforementioned nonconvex optimization problem is investigated. In particular, for the multiple-input single-output (MISO) wiretap channel, the optimal input covariance matrix is obtained in closed form. For general cases, we derive the necessary conditions for the optimal input covariance matrix consisting of a set of equations. For the case in which the transmitter has two antennas, the derived necessary conditions can result in a closed form solution; For the case in which the difference between the Grams is indefinite and has all negative eigenvalues except one positive eigenvalue, the optimal input covariance matrix has rank one and can be obtained in closed form; For other cases, the solution is proved to be a fixed point of a mapping from a convex set to itself and an iterative procedure is provided to search for it. Numerical results are presented to illustrate the proposed theoretical findings. Index Terms" }
1611.00044
0909.2622
This strengthens the earlier result in #REFR (transmission on non-negative rather than positive directions).
[ "In particular, the optimal covariance does not converge to a scaled identity in the high-SNR case and thus isotropic signaling is sub-optimal in this regime.", "Theorem 1, in combination with the rank-1 solution, provides the complete characterization of the optimal covariance for the case of two transmit antennas (for any channel, degraded or not).", "The cases of high-SNR and of weak eavesdropper are elaborated in Corollaries 1 and 2.", "An optimal covariance matrix for the general case (degraded or not) is characterized in Proposition 2, which shows that there is hidden convexity in the respective optimization problem, even when the channel is not degraded.", "Proposition 3 gives a necessary condition of optimality for the general case, which is a transmission of the positive directions of the difference channel where the main channel is stronger than the eavesdropper one." ]
[ "While the proof in #OTHEREFR is rather straightforward and is based on a singular transformation (multiplication by a matrix that is singular when the covariance matrix is rank-deficient) of the KKT conditions, significantly more effort and a new approach are required to establish the stronger result.", "It avoids using a singular transformation (since some information about active signalling sub-space is irreversibly lost in the process) but relies on a novel property of positive semi-definite matrices (Lemma 2) and their block-partitioned representation to establish a property of dual variables from which the desired result follow.", "This result also allows one to establish a tighter bound on the rank of an optimal covariance matrix (Corollary 3) than those available in the literature for the general case.", "A lower bound on the secrecy capacity in the general case is established in Proposition 4.", "While the original problem is non-convex so that all powerful tools of convex optimization #OTHEREFR cannot be used, the lower bound is expressed via a convex problem and thus can be solved efficiently by a numerical algorithm." ]
[ "positive directions" ]
result
{ "title": "Optimal Signaling for Secure Communications Over Gaussian MIMO Wiretap Channels", "abstract": "Optimal signaling over the Gaussian multiple-input multiple-output wire-tap channel is studied under the total transmit power constraint. A closed-form solution for an optimal transmit covariance matrix is obtained when the channel is strictly degraded. In combination with the rank-1 solution, this provides the complete characterization of the optimal covariance for the case of two transmit antennas. The cases of weak eavesdropper and high SNR are considered. It is shown that the optimal covariance does not converge to a scaled identity in the high-SNR regime. Necessary optimality conditions and a tight upper bound on the rank of an optimal covariance matrix are established for the general case, along with a lower bound to the secrecy capacity, which is tight in a number of scenarios." }
{ "title": "Transmitter Optimization for Achieving Secrecy Capacity in Gaussian MIMO Wiretap Channels", "abstract": "We consider a Gaussian multiple-input multiple-output (MIMO) wiretap channel model, where there exists a transmitter, a legitimate receiver and an eavesdropper, each node equipped with multiple antennas. We study the problem of finding the optimal input covariance matrix that achieves secrecy capacity subject to a power constraint, which leads to a non-convex optimization problem that is in general difficult to solve. Existing results for this problem address the case in which the transmitter and the legitimate receiver have two antennas each and the eavesdropper has one antenna. For the general cases, it has been shown that the optimal input covariance matrix has low rank when the difference between the Grams of the eavesdropper and the legitimate receiver channel matrices is indefinite or semi-definite, while it may have low rank or full rank when the difference is positive definite. In this paper, the aforementioned nonconvex optimization problem is investigated. In particular, for the multiple-input single-output (MISO) wiretap channel, the optimal input covariance matrix is obtained in closed form. For general cases, we derive the necessary conditions for the optimal input covariance matrix consisting of a set of equations. For the case in which the transmitter has two antennas, the derived necessary conditions can result in a closed form solution; For the case in which the difference between the Grams is indefinite and has all negative eigenvalues except one positive eigenvalue, the optimal input covariance matrix has rank one and can be obtained in closed form; For other cases, the solution is proved to be a fixed point of a mapping from a convex set to itself and an iterative procedure is provided to search for it. Numerical results are presented to illustrate the proposed theoretical findings. Index Terms" }
1611.00044
0909.2622
While the proof in #REFR is rather straightforward and is based on a singular transformation (multiplication by a matrix that is singular when the covariance matrix is rank-deficient) of the KKT conditions, significantly more effort and a new approach are required to establish the stronger result.
[ "Theorem 1, in combination with the rank-1 solution, provides the complete characterization of the optimal covariance for the case of two transmit antennas (for any channel, degraded or not).", "The cases of high-SNR and of weak eavesdropper are elaborated in Corollaries 1 and 2.", "An optimal covariance matrix for the general case (degraded or not) is characterized in Proposition 2, which shows that there is hidden convexity in the respective optimization problem, even when the channel is not degraded.", "Proposition 3 gives a necessary condition of optimality for the general case, which is a transmission of the positive directions of the difference channel where the main channel is stronger than the eavesdropper one.", "This strengthens the earlier result in #OTHEREFR (transmission on non-negative rather than positive directions)." ]
[ "It avoids using a singular transformation (since some information about active signalling sub-space is irreversibly lost in the process) but relies on a novel property of positive semi-definite matrices (Lemma 2) and their block-partitioned representation to establish a property of dual variables from which the desired result follow.", "This result also allows one to establish a tighter bound on the rank of an optimal covariance matrix (Corollary 3) than those available in the literature for the general case.", "A lower bound on the secrecy capacity in the general case is established in Proposition 4.", "While the original problem is non-convex so that all powerful tools of convex optimization #OTHEREFR cannot be used, the lower bound is expressed via a convex problem and thus can be solved efficiently by a numerical algorithm.", "This bound is tight (achieved with equality) in a number of cases: when the SNR is low, or when the legitimate and eavesdropper channels have the same right singular vectors, or when the channel is degraded, thus providing an additional insight into optimal signalling." ]
[ "covariance matrix" ]
background
{ "title": "Optimal Signaling for Secure Communications Over Gaussian MIMO Wiretap Channels", "abstract": "Optimal signaling over the Gaussian multiple-input multiple-output wire-tap channel is studied under the total transmit power constraint. A closed-form solution for an optimal transmit covariance matrix is obtained when the channel is strictly degraded. In combination with the rank-1 solution, this provides the complete characterization of the optimal covariance for the case of two transmit antennas. The cases of weak eavesdropper and high SNR are considered. It is shown that the optimal covariance does not converge to a scaled identity in the high-SNR regime. Necessary optimality conditions and a tight upper bound on the rank of an optimal covariance matrix are established for the general case, along with a lower bound to the secrecy capacity, which is tight in a number of scenarios." }
{ "title": "Transmitter Optimization for Achieving Secrecy Capacity in Gaussian MIMO Wiretap Channels", "abstract": "We consider a Gaussian multiple-input multiple-output (MIMO) wiretap channel model, where there exists a transmitter, a legitimate receiver and an eavesdropper, each node equipped with multiple antennas. We study the problem of finding the optimal input covariance matrix that achieves secrecy capacity subject to a power constraint, which leads to a non-convex optimization problem that is in general difficult to solve. Existing results for this problem address the case in which the transmitter and the legitimate receiver have two antennas each and the eavesdropper has one antenna. For the general cases, it has been shown that the optimal input covariance matrix has low rank when the difference between the Grams of the eavesdropper and the legitimate receiver channel matrices is indefinite or semi-definite, while it may have low rank or full rank when the difference is positive definite. In this paper, the aforementioned nonconvex optimization problem is investigated. In particular, for the multiple-input single-output (MISO) wiretap channel, the optimal input covariance matrix is obtained in closed form. For general cases, we derive the necessary conditions for the optimal input covariance matrix consisting of a set of equations. For the case in which the transmitter has two antennas, the derived necessary conditions can result in a closed form solution; For the case in which the difference between the Grams is indefinite and has all negative eigenvalues except one positive eigenvalue, the optimal input covariance matrix has rank one and can be obtained in closed form; For other cases, the solution is proved to be a fixed point of a mapping from a convex set to itself and an iterative procedure is provided to search for it. Numerical results are presented to illustrate the proposed theoretical findings. Index Terms" }
1504.03725
0909.2622
I. INTRODUCTION
A weaker form of this result (non-negative instead of positive directions) has been obtained earlier in #REFR .
[ "The optimal transmit covariance matrix under the total power constraint has been obtained for some special cases, e.g.", "low/high SNR, multiple-input single-output (MISO) channels, fullrank, rank-1 or weak eavesdropper cases, or the parallel channel #OTHEREFR - #OTHEREFR , but the general case remains illusive.", "The main difficulty lies in the fact that the underlying optimization problem is in general not a convex problem.", "It was conjectured in #OTHEREFR and proved in #OTHEREFR using an indirect approach (via the degraded channel) that the optimal signaling is on the positive directions of the difference channel (where the legitimate channel is stronger than the eavesdropper one).", "A direct proof based on the necessary Karush-Kuhn-Tucker (KKT) optimality conditions has been obtained in #OTHEREFR ." ]
[ "In the general case, the rank of an optimal covariance matrix does not exceed the number of positive eigenvalues of the difference channel matrix #OTHEREFR .", "An exact full-rank solution for the optimal covariance has been obtained in #OTHEREFR and its properties have been characterized.", "In particular, unlike the regular channel (no eavesdropper), the optimal power allocation does not converge to uniform one at high SNR and the latter remains sub-optimal at any finite SNR.", "In the case of weak eavesdropper (its singular values are much smaller than those of the legitimate channel), the optimal signaling mimics the conventional one (water-filling over the channel eigenmodes) with an adjustment for the eavesdropper channel.", "The rank-one #OTHEREFR DRAFT solution in combination with the full-rank one provides a complete solution for the case of two transmit antennas and any number of receive/eavesdropper antennas." ]
[ "positive directions" ]
result
{ "title": "An Algorithm for Global Maximization of Secrecy Rates in Gaussian MIMO Wiretap Channels", "abstract": "Optimal signaling for secrecy rate maximization in Gaussian MIMO wiretap channels is considered. While this channel has attracted a significant attention recently and a number of results have been obtained, including the proof of the optimality of Gaussian signalling, an optimal transmit covariance matrix is known for some special cases only and the general case remains an open problem. An iterative custom-made algorithm to find a globally-optimal transmit covariance matrix in the general case is developed in this paper, with guaranteed convergence to a global optimum. While the original optimization problem is not convex and hence difficult to solve, its minimax reformulation can be solved via the convex optimization tools, which is exploited here. The proposed algorithm is based on the barrier method extended to deal with a minimax problem at hand. Its convergence to a global optimum is proved for the general case (degraded or not) and a bound for the optimality gap is given for each step of the barrier method. The performance of the algorithm is demonstrated via numerical examples. In particular, 20 to 40 Newton steps are already sufficient to solve the sufficient optimality conditions with very high precision (up to the machine precision level), even for large systems. Even fewer steps are required if the secrecy capacity is the only quantity of interest. The algorithm can be significantly simplified for the degraded channel case and can also be adopted to include the per-antenna power constraints (instead or in addition to the total power constraint). It also solves the dual problem of minimizing the total power subject to the secrecy rate constraint. S. Loyka is with the" }
{ "title": "Transmitter Optimization for Achieving Secrecy Capacity in Gaussian MIMO Wiretap Channels", "abstract": "We consider a Gaussian multiple-input multiple-output (MIMO) wiretap channel model, where there exists a transmitter, a legitimate receiver and an eavesdropper, each node equipped with multiple antennas. We study the problem of finding the optimal input covariance matrix that achieves secrecy capacity subject to a power constraint, which leads to a non-convex optimization problem that is in general difficult to solve. Existing results for this problem address the case in which the transmitter and the legitimate receiver have two antennas each and the eavesdropper has one antenna. For the general cases, it has been shown that the optimal input covariance matrix has low rank when the difference between the Grams of the eavesdropper and the legitimate receiver channel matrices is indefinite or semi-definite, while it may have low rank or full rank when the difference is positive definite. In this paper, the aforementioned nonconvex optimization problem is investigated. In particular, for the multiple-input single-output (MISO) wiretap channel, the optimal input covariance matrix is obtained in closed form. For general cases, we derive the necessary conditions for the optimal input covariance matrix consisting of a set of equations. For the case in which the transmitter has two antennas, the derived necessary conditions can result in a closed form solution; For the case in which the difference between the Grams is indefinite and has all negative eigenvalues except one positive eigenvalue, the optimal input covariance matrix has rank one and can be obtained in closed form; For other cases, the solution is proved to be a fixed point of a mapping from a convex set to itself and an iterative procedure is provided to search for it. Numerical results are presented to illustrate the proposed theoretical findings. Index Terms" }
1904.11481
1812.10455
B. Exogenous Update Arrivals
Note that when p_1 = 1, (24) reduces to the building block result in #REFR [Theorem 1].
[ "However, type I update interarrival to a node, S I , is now equal to S I = X k1:n +Z + j+M1−1", "In the following theorem, we determine the age of a type I update at an individual node when the update streams arrive exogenously at the source node.", "Theorem 2 Under the earliest k 1 and k 2 transmission scheme for type I and type II updates that arrive at the source node as Poisson processes with rates µ 1 and µ 2 , respectively, the average type I age at an individual node is", "where first and second moments of S I are as in (22) and (23).", "The proof of Theorem 2 follows accordingly from that of Theorem 1." ]
[ "By making the corresponding replacements as in Section III-A we can obtain the average age expression of type II update stream, ∆ II .", "When the service times of the packets of the same kind are i.i.d.", "shifted exponential random variables and n is large, we can further simplify (24) as follows.", "Corollary 2 For large n and n > k i we set k i = α i n for i = 1, 2.", "For shifted exponential transmission times X and X with parameters (λ, c) and (λ,c) for type I and type II updates, respectively, ∆ I can be approximated as" ]
[ "Theorem" ]
background
{ "title": "Age of Information in Multicast Networks with Multiple Update Streams", "abstract": "Abstract-We consider the age of information in a multicast network where there is a single source node that sends timesensitive updates to n receiver nodes. Each status update is one of two kinds: type I or type II. To study the age of information experienced by the receiver nodes for both types of updates, we consider two cases: update streams are generated by the source node at-will and update streams arrive exogenously to the source node. We show that using an earliest k1 and k2 transmission scheme for type I and type II updates, respectively, the age of information of both update streams at the receiver nodes can be made a constant independent of n. In particular, the source node transmits each type I update packet to the earliest k1 and each type II update packet to the earliest k2 of n receiver nodes. We determine the optimum k1 and k2 stopping thresholds for arbitrary shifted exponential link delays to individually and jointly minimize the average age of both update streams and characterize the pareto optimal curve for the two ages." }
{ "title": "Age of information in multihop multicast networks", "abstract": "We consider the age of information in a multihop multicast network where there is a single source node sending timesensitive updates to n L end nodes, and L denotes the number of hops. In the first hop, the source node sends updates to n first-hop receiver nodes, and in the second hop each first-hop receiver node relays the update packets that it has received to n further users that are connected to it. This network architecture continues in further hops such that each receiver node in hop is connected to n further receiver nodes in hop + 1. We study the age of information experienced by the end nodes, and in particular, its scaling as a function of n. We show that, using an earliest k transmission scheme in each hop, the age of information at the end nodes can be made a constant independent of n. In particular, the source node transmits each update packet to the earliest k 1 of the n first-hop nodes, and each first-hop node that receives the update relays it to the earliest k 2 out of n second-hop nodes that are connected to it and so on. We determine the optimum k stopping value for each hop for arbitrary shifted exponential link delays." }
1110.3672
cs/0305040
Corollary 1
In Appendix C we report tests of our approach for bounded model checking of DLTL formulas along the lines of the LTL BMC experiments in #REFR .
[ "This formula is valid if its negation ✸¬(mail(b) ⊃ ✸¬mail(b)) is not satisfiable.", "We verify the satisfiability of this formula, by adding to the translation of the domain description the constraint ← not sat (ev(neg(impl(mail(b) , ev(neg(mail(b)))))), 0). and looking for an extension.", "The resulting set of rules indeed has extensions, which can be found for k ≥ 3 and provide counterexamples to the validity of the property above.", "For instance, the extension in which next(0, 1), next(1, 2), next(2, 3), next(3, 0), occurs(begin, 0), occurs(sense mail(a), 1), occurs(sense mail(b), 2), occurs(deliver mail(a), 3), where mail(b) holds in all states, and mail(a) only in states 2 and 3, can be obtained for k = 3.", "In Appendix B we provide the encoding of BMC and of Example 2 in the DLVComplex extension (https://www.mat.unical.it/dlv-complex) of DLV #OTHEREFR ." ]
[ "Results are provided for a DLV encoding of BMC and of action domain descriptions for the dining philosophers problems considered in that paper. The scalability of the two approaches is similar." ]
[ "bounded model checking" ]
method
{ "title": "Reasoning about Actions with Temporal Answer Sets", "abstract": "In this paper we combine Answer Set Programming (ASP) with Dynamic Linear Time Temporal Logic (DLTL) to define a temporal logic programming language for reasoning about complex actions and infinite computations. DLTL extends propositional temporal logic of linear time with regular programs of propositional dynamic logic, which are used for indexing temporal modalities. The action language allows general DLTL formulas to be included in domain descriptions to constrain the space of possible extensions. We introduce a notion of Temporal Answer Set for domain descriptions, based on the usual notion of Answer Set. Also, we provide a translation of domain descriptions into standard ASP and we use Bounded Model Checking techniques for the verification of DLTL constraints." }
{ "title": "Bounded LTL Model Checking with Stable Models", "abstract": "In this paper bounded model checking of asynchronous concurrent systems is introduced as a promising application area for answer set programming. As the model of asynchronous systems a generalisation of communicating automata, 1-safe Petri nets, are used. It is shown how a 1-safe Petri net and a requirement on the behaviour of the net can be translated into a logic program such that the bounded model checking problem for the net can be solved by computing stable models of the corresponding program. The use of the stable model semantics leads to compact encodings of bounded reachability and deadlock detection tasks as well as the more general problem of bounded model checking of linear temporal logic. Correctness proofs of the devised translations are given, and some experimental results using the translation and the Smodels system are presented." }
2003.02117
1910.13636
B. OP and ER
Based on the results in #REFR , the diversity orders of all the NOMA users can be approximated by the number of RAs L in the I-RIS cases when the number of RISs is sufficiently high.
[ "We then focus on the diversity orders of user k in cluster m, which can be obtained for evaluating the slope of OP.", "Proposition 1.", "From Theorem 1, the diversity orders for the I-RIS cases can be determined by expanding the lower incomplete Gamma function, and the diversity order of user k in cluster m of the proposed RIS-aided SCB design can be given by", "Proof: Please refer to Appendix B.", "Remark 5." ]
[ "We then turn our attention to the ER of user K in cluster m, which is a salient metric for performance analysis, and hence the approximated ER expressions for user K in cluster m is given in the following Theorem.", "Theorem 2.", "When the number of RISs N is sufficiently high, and α 2 v − K q=v+1 α 2 q ε v > 0 with v = 1, · · · , k, the ER of user K in cluster m can be expressed in the closed-form as follows:", "where C = Lσ 2", "Proof: Please refer to Appendix C." ]
[ "NOMA users" ]
result
{ "title": "MIMO-NOMA Networks Relying on Reconfigurable Intelligent Surface: A Signal Cancellation Based Design", "abstract": "Reconfigurable intelligent surface (RIS) technique stands as a promising signal enhancement or signal cancellation technique for next generation networks. We design a novel passive beamforming weight at RISs in a multiple-input multiple-output (MIMO) non-orthogonal multiple access (NOMA) network for simultaneously serving paired users, where a signal cancellation based (SCB) design is employed. In order to implement the proposed SCB design, we first evaluate the minimal required number of RISs in both the diffuse scattering and anomalous reflector scenarios. Then, new channel statistics are derived for characterizing the effective channel gains. In order to evaluate the network's performance, we derive the closed-form expressions both for the outage probability (OP) and for the ergodic rate (ER). The diversity orders as well as the high-signal-to-noise (SNR) slopes are derived for engineering insights. The network's performance of a finite resolution design has been evaluated. Our analytical results demonstrate that: i) the inter-cluster interference can be eliminated with the aid of large number of RIS elements; ii) the line-of-sight of the BS-RIS and RIS-user links are required for the diffuse scattering scenario, whereas the LoS links are not compulsory for the anomalous reflector scenario. ). However, as mentioned above, there were many constrains on the number of RAs in the previous designs. Recently, reconfigurable intelligent surface (RIS) technique stands as the next generation relay technique, also namely relay 2.0, received considerable attention due to its high EE [14]-[17]. The RIS elements are capable of independently shifting the signal phase and absorbing the signal energy, and hence the reflected signals can be boosted or diminished for wireless transmission [18]-[20]. By doing so, numerous application scenarios have been considered, e.g. RIS-aided coverage enhancement. The RIS elements are normally deployed on the building or on the wall [21]. A novel three-dimension design for aerial RIS network was proposed in [22], where RIS elements are employed at aerial platforms, and hence a full-angle reflection can be implemented. Currently, RIS networks are simply separated into two categories [23], i.e. anomalous reflector or diffuse scatterer for mmWave and sub-6G networks, respectively. The coverage distance is reduced in mmWave networks [24], and hence more users are located in coverage-holes compared to conventional networks. Thus, reflected signals can be aligned by RISs for serving users located in the coverage-holes [25]. NOMA and RIS techniques can be naturally integrated for enhancing both SE and EE. The RISs can be deployed for the cell-edge users in the NOMA networks, where the reflected signal cannot be received at the cell-center users [26] . An one-bit coding scheme was invoked in the RIS-aided NOMA networks, where imperfect SIC scenario was evaluated in [27] . Since both the BS and RISs are pre-deployed, and hence the line-of-sight (LoS) links between the BS and RISs are expected for improving desired signal power [28] . The Rician fading channels were utilized for illustrating the channel gain of both the BS-RIS and RIS-user links in [29] . A SISO-NOMA network was proposed in [30] , where a prioritized design was proposed for further enhancing the network's SE and EE. 
However, previous contributions mainly focus on signal enhancement based (SEB) designs, where signals are boosted at the user side or at the BS side, whilst there is a paucity of investigations on the signal-cancellation-based (SCB) design of RIS-aided networks. Inspired by the concepts of signal cancellation [31], [32], we propose a novel SCB design concept, which provides the desired degree of flexibility for RIS-aided networks. In the MIMO-NOMA" }
{ "title": "Exploiting Intelligent Reflecting Surfaces in Multi-Antenna Aided NOMA Systems", "abstract": "This paper investigates a downlink multiple-input single-output intelligent reflecting surface (IRS) non-orthogonal multiple access (NOMA) system, where a base station (BS) serves multiple users with the aid of IRSs. Our goal is to maximize the sum rate of all users by jointly optimizing the active beamforming at the BS and the passive beamforming at the IRS, subject to successive interference cancellation decoding rate conditions and IRS reflecting elements constraints. In term of the characteristics of reflection amplitudes and phase shifts, we consider ideal and non-ideal IRS assumptions. To tackle the formulated non-convex problems, we propose efficient algorithms by invoking alternating optimization, which design the active beamforming and passive beamforming alternately. For the ideal IRS scenario, the two subproblems are solved by invoking the successive convex approximation technique. For the non-ideal IRS scenario, constant modulus IRS elements are further divided into continuous phase shifters and discrete phase shifters. To tackle the passive beamforming problem with continuous phase shifters, a novel algorithm is developed by utilizing the sequential rank-one constraint relaxation approach, which is guaranteed to find a locally optimal rank-one solution. Then, a quantization-based scheme is proposed for discrete phase shifters. Finally, numerical results illustrate that: i) the system sum rate can be significantly improved by deploying the IRS with our proposed algorithms; ii) 3-bit phase shifters are capable of achieving almost the same performance as the ideal IRS; iii) the proposed IRS-NOMA systems achieve higher system sum rate than the IRS-aided orthogonal multiple access system." }
1808.08509
1706.09077
Single Image Super Resolution
Various deep learning methods have been applied in the past to solve the SISR problem, many of which are summarized in #REFR . First, Dong et al.
[]
[ "proposed in #OTHEREFR the replacement of all steps to produce a high resolution imagefeature extraction then mapping then reconstruction -by a single neural network.", "The deep learning model performed better than other example-based methods.", "However, it was proposed in #OTHEREFR that deeper networks may not be effective for SISR. This was proved wrong by Kim et al. in #OTHEREFR .", "They used a very deep CNN model that performed better than #OTHEREFR . Kim et al.", "in #OTHEREFR used residual learning proposed by He et al." ]
[ "Various deep learning" ]
method
{ "title": "Efficient Single Image Super Resolution using Enhanced Learned Group Convolutions", "abstract": "Abstract. Convolutional Neural Networks (CNNs) have demonstrated great results for the single-image super-resolution (SISR) problem. Currently, most CNN algorithms promote deep and computationally expensive models to solve SISR. However, we propose a novel SISR method that uses relatively less number of computations. On training, we get group convolutions that have unused connections removed. We have refined this system specifically for the task at hand by removing unnecessary modules from original CondenseNet. Further, a reconstruction network consisting of deconvolutional layers has been used in order to upscale to high resolution. All these steps significantly reduce the number of computations required at testing time. Along with this, bicubic upsampled input is added to the network output for easier learning. Our model is named SRCondenseNet. We evaluate the method using various benchmark datasets and show that it performs favourably against the state-of-the-art methods in terms of both accuracy and number of computations required." }
{ "title": "Super-Resolution via Deep Learning", "abstract": "The recent phenomenal interest in convolutional neural networks (CNNs) must have made it inevitable for the super-resolution (SR) community to explore its potential. The response has been immense and in the last three years, since the advent of the pioneering work, there appeared too many works not to warrant a comprehensive survey. This paper surveys the SR literature in the context of deep learning. We focus on the three important aspects of multimedia -namely image, video and multi-dimensions, especially depth maps. In each case, first relevant benchmarks are introduced in the form of datasets and state of the art SR methods, excluding deep learning. Next is a detailed analysis of the individual works, each including a short description of the method and a critique of the results with special reference to the benchmarking done. This is followed by minimum overall benchmarking in the form of comparison on some common dataset, while relying on the results reported in various works." }
1904.10105
1701.05303
where
The type system presented in this section is essentially taken from Parys #REFR ; we have applied some cosmetic changes, though.
[ "Corollary 9.", "The following conditions are equivalent for a homogeneous and closed (potentially infinite) lambda-term M of sort o:", "• for every n ∈ N, in the tree generated by M there exists a branch with at least n appearances of the constant a, and • for every n ∈ N, there exists a derivation for ⊢ m M : (m, m, o) with (m + 1)-value at least n.", "Because the latter condition is easily decidable for lambda-terms represented by recursion schemes, the corollary implies decidability of the former condition.", "Bibliographic Note." ]
[ "In Parys #OTHEREFR the type system is extended to the task of counting multiple constants: the (m + 1)-value is not a number, but a tuple, where each coordinate of the tuple estimates the number of appearances of a particular constant.", "In particular, Corollary 9 is extended there to the property \"for every n ∈ N, in the tree generated by M there exists a branch with at least n appearances of every constant from a set A\", giving its decidability.", "Deciding this property is known under the names simultaneous unboundedness problem (SUP) and diagonal problem (these are two different names for the same problem).", "SUP for recursion schemes was first solved in Clemente, Parys, Salvati, and Walukiewicz #OTHEREFR , in a different way.", "The advantage of solving SUP using the type system presented here is twofold." ]
[ "type system" ]
method
{ "title": "Intersection Types for Unboundedness Problems", "abstract": "Intersection types have been originally developed as an extension of simple types, but they can also be used for refining simple types. In this survey we concentrate on the latter option; more precisely, on the use of intersection types for describing quantitative properties of simply typed lambda-terms. We present two type systems. The first allows to estimate (by appropriately defined value of a derivation) the number of appearances of a fixed constant a in the beta-normal form of a considered lambdaterm. The second type system is more complicated, and allows to estimate the maximal number of appearances of the constant a on a single branch." }
{ "title": "Intersection Types and Counting", "abstract": "We present a new approach to the following meta-problem: given a quantitative property of trees, design a type system such that the desired property for the tree generated by an infinitary ground λ -term corresponds to some property of a derivation of a type for this λ -term, in this type system. Our approach is presented in the particular case of the language finiteness problem for nondeterministic higher-order recursion schemes (HORSes): given a nondeterministic HORS, decide whether the set of all finite trees generated by this HORS is finite. We give a type system such that the HORS can generate a tree of an arbitrarily large finite size if and only if in the type system we can obtain derivations that are arbitrarily large, in an appropriate sense; the latter condition can be easily decided." }
1811.02034
1309.4334
Handling of Exceptions and Code Changes
To correctly detect code changes in the debugger project, the IDRA Changes Handler leverages Epicea #REFR , an existing library for handling such events.
[ "The restart queue is needed to keep track of the debugging sessions that were already sent to the IDRA Manager.", "It allows IDRA to restart the failed debugging session after the developer commits its fix to the remote machine.", "Figure 5 shows how, after the developer produces and commits a fix, this triggers a re-execution in the debugged application.", "This re-execution phase can happen following different strategies, depending on the use case.", "For instance we provide specific restarting strategies for test execution, or for tasks scheduled using the TaskIt library [3] ." ]
[]
[ "debugger project" ]
method
{ "title": "Out-Of-Place debugging: a debugging architecture to reduce debugging interference", "abstract": "Abstract Context Recent studies show that developers spend most of their programming time testing, verifying and debugging software. As applications become more and more complex, developers demand more advanced debugging support to ease the software development process. Inquiry Since the 70's many debugging solutions have been introduced. Amongst them, online debuggers provide good insight on the conditions that led to a bug, allowing inspection and interaction with the variables of the program. However, most of the online debugging solutions introduce debugging interference to the execution of the program, i.e. pauses, latency, and evaluation of code containing side-effects. Approach This paper investigates a novel debugging technique called out-of-place debugging. The goal is to minimize the debugging interference characteristic of online debugging while allowing online remote capabilities. An out-of-place debugger transfers the program execution and application state from the debugged application to the debugger application, each running in a different process. Knowledge On the one hand, out-of-place debugging allows developers to debug applications remotely, overcoming the need of physical access to the machine where the debugged application is running. On the other hand, debugging happens locally on the remote machine avoiding latency. That makes it suitable to be deployed on a distributed system and handle the debugging of several processes running in parallel. Grounding We implemented a concrete out-of-place debugger for the Pharo Smalltalk programming language. We show that our approach is practical by running several benchmarks, comparing our approach with a classic remote online debugger. We show that our prototype debugger outperforms a traditional remote debugger by 1000 times in several scenarios. Moreover, we show that the presence of our debugger does not impact the overall performance of an application. Importance This work combines remote debugging with the debugging experience of a local online debugger. Out-of-place debugging is the first online debugging technique that can minimize debugging interference while debugging a remote application. Yet, it still keeps the benefits of online debugging (e.g., step-by-step execution). This makes the technique suitable for modern applications which are increasingly parallel, distributed and reactive to streams of data from various sources like sensors, UI, network, etc. Software and its engineering → Software testing and debugging;" }
{ "title": "Representing Code History with Development Environment Events", "abstract": "Modern development environments handle information about the intent of the programmer: for example, they use abstract syntax trees for providing high-level code manipulation such as refactorings; nevertheless, they do not keep track of this information in a way that would simplify code sharing and change understanding. In most Smalltalk systems, source code modifications are immediately registered in a transaction log often called a ChangeSet. Such mechanism has proven reliability, but it has several limitations. In this paper we analyse such limitations and describe scenarios and requirements for tracking fine-grained code history with a semantic representation. We present Epicea, an early prototype implementation. We want to enrich code sharing with extra information from the IDE, which will help understanding the intention of the changes and let a new generation of tools act in consequence." }
1911.11932
1711.05225
B. Attacker Agenda
User Privacy Violation: smart devices are increasingly trusted with private user data such as shopping history, voice commands, or medical recordings #REFR .
[ "The y-axis in Figure 1 represents the attacker's motivation for attacking an edge-deployed neural network.", "We classify attacker motivations into four categories: Denial of Service: attackers may want to prevent a device running a neural network from properly functioning.", "For example, attackers may want to prevent smart cameras from properly classifying recordings in order not to raise alarms.", "Denial of Service (DoS) attacks prevent a device from maintaining availability and completing its function.", "As feedforward neural networks are data-independent and have fixed latencies, DoS attacks targeting DNNs are only applicable to accelerators running data-dependent models, e.g., recurrent neural networks #OTHEREFR or neural networks with early exits like BranchyNets #OTHEREFR or Tree LSTMs #OTHEREFR ." ]
[ "This data is valuable for its advertising, monitoring, or polling value.", "User privacy violations are cases where the attacker is able to access measured or stored sensor data from the device or user data the device from the network.", "For example, attacks on voice assistants where the attacker can access previous voice commands constitute a local privacy violation.", "Model Privacy Violation: the attacker may attempt to exfiltrate a neural network model for a number of reasons: (1) models require significant investment to develop, and, as such, may be stolen and sold, or used in ensembles as a black box #OTHEREFR , #OTHEREFR finding adversarial examples is significantly easier if the attacker has access to a model (i.e., the white-box scenario), compared to only having access to model inputs and outputs (i.e., the black-box scenario) #OTHEREFR , or (3) the attacker may attempt to learn data from the dataset the model was trained on #OTHEREFR .", "Integrity Violation: the attackers may not want to outright prevent the device from functioning, but may want to force the neural network to perform in an unacceptable way." ]
[ "smart devices", "medical recordings" ]
background
{ "title": "Survey of Attacks and Defenses on Edge-Deployed Neural Networks", "abstract": "Deep Neural Network (DNN) workloads are quickly moving from datacenters onto edge devices, for latency, privacy, or energy reasons. While datacenter networks can be protected using conventional cybersecurity measures, edge neural networks bring a host of new security challenges. Unlike classic IoT applications, edge neural networks are typically very compute and memory intensive, their execution is data-independent, and they are robust to noise and faults. Neural network models may be very expensive to develop, and can potentially reveal information about the private data they were trained on, requiring special care in distribution. The hidden states and outputs of the network can also be used in reconstructing user inputs, potentially violating users' privacy. Furthermore, neural networks are vulnerable to adversarial attacks, which may cause misclassifications and violate the integrity of the output. These properties add challenges when securing edge-deployed DNNs, requiring new considerations, threat models, priorities, and approaches in securely and privately deploying DNNs to the edge. In this work, we cover the landscape of attacks on, and defenses, of neural networks deployed in edge devices and provide a taxonomy of attacks and defenses targeting edge DNNs." }
{ "title": "CheXNet: Radiologist-Level Pneumonia Detection on Chest X-Rays with Deep Learning", "abstract": "We develop an algorithm that can detect pneumonia from chest X-rays at a level exceeding practicing radiologists. Our algorithm, CheXNet, is a 121-layer convolutional neural network trained on ChestX-ray14, currently the largest publicly available chest Xray dataset, containing over 100,000 frontalview X-ray images with 14 diseases. Four practicing academic radiologists annotate a test set, on which we compare the performance of CheXNet to that of radiologists. We find that CheXNet exceeds average radiologist performance on the F1 metric. We extend CheXNet to detect all 14 diseases in ChestX-ray14 and achieve state of the art results on all 14 diseases." }
1904.02633
1711.05225
Evaluation of Generated Reports
Learning Meaningful Attention Maps Attention maps have been a useful tool in visualizing what a neural network is attending to, as demonstrated by #REFR .
[ "For example, TieNet is prone to generate nasogastric tube mentions while our model tends to mention tracheostomy or endotracheal tube, and yet both models have difficulty identifying some specific lines such as chest tube or PICC line.", "Similarly, both systems do not generate the sentence with positive lung parenchymal findings correctly.", "From this (small) sample, we are unable to draw a conclusion whether our model or TieNet truly outperforms the other since both present with significant issues and each has strengths the other lacks.", "Critically, neither of them can describe the majority of the findings in the chest radiograph well, especially for positive cases, even if the quantitative metrics demonstrate the reasonable performance of the models.", "This illustrates that significant progress is still needed in this domain, perhaps building on the directions we explore here before these techniques could be deployed in a clinical environment." ]
[ "Figure 3 shows the intermediate attention maps for each word when it is being generated.", "As we can observe, the model is able to roughly capture the location of the indicated disease or parts, but we also find, interestingly, that the attention map tends to be the complement of the actual region of interest when the disease keywords follow a negation cue word.", "This might indicate that the model is actively looking at the rest of the image to ensure it does not miss any possible symptoms exhibited before asserting disease-free states.", "This behavior has not been widely discussed before, partially because attention maps for negations are not the primary focus of typical image captioning tasks, and most attention mechanisms employed in a clinical context were on classification tasks where negation is formulated differently." ]
[ "Meaningful Attention Maps", "neural network" ]
background
{ "title": "Clinically Accurate Chest X-Ray Report Generation", "abstract": "The automatic generation of radiology reports given medical radiographs has significant potential to operationally and clinically improve patient care. A number of prior works have focused on this problem, employing advanced methods from computer vision and natural language generation to produce readable reports. However, these works often fail to account for the particular nuances of the radiology domain, and, in particular, the critical importance of clinical accuracy in the resulting generated reports. In this work, we present a domain-aware automatic chest X-Ray radiology report generation system which first predicts what topics will be discussed in the report, then conditionally generates sentences corresponding to these topics. The resulting system is fine-tuned using reinforcement learning, considering both readability and clinical accuracy, as assessed by the proposed Clinically Coherent Reward. We verify this system on two datasets, Open-I and MIMIC-CXR, and demonstrate that our model offers marked improvements on both language generation metrics and CheXpert assessed accuracy over a variety of competitive baselines." }
{ "title": "CheXNet: Radiologist-Level Pneumonia Detection on Chest X-Rays with Deep Learning", "abstract": "We develop an algorithm that can detect pneumonia from chest X-rays at a level exceeding practicing radiologists. Our algorithm, CheXNet, is a 121-layer convolutional neural network trained on ChestX-ray14, currently the largest publicly available chest Xray dataset, containing over 100,000 frontalview X-ray images with 14 diseases. Four practicing academic radiologists annotate a test set, on which we compare the performance of CheXNet to that of radiologists. We find that CheXNet exceeds average radiologist performance on the F1 metric. We extend CheXNet to detect all 14 diseases in ChestX-ray14 and achieve state of the art results on all 14 diseases." }
1801.09927
1711.05225
C. Evaluation
In nearly all the 14 classes, our method yields best performance. Only Rajpurkar et al. #REFR report higher accuracy on Hernia.
[ "Comparing with these methods, this paper contributes new state of the art to the community: average AUC = 0.871.", "AG-CNN exceeds the previous state of the art #OTHEREFR by 2.9%.", "AUC scores of pathologies such as Cardiomegaly and Infltration are higher than #OTHEREFR by about 0.03.", "AUC scores of Mass, Fibrosis and Consolidation surpass #OTHEREFR by about 0.05.", "Furthermore, we train AG-CNN with 70% of the dataset, but 80% are used in #OTHEREFR , #OTHEREFR ." ]
[ "In all, the classification accuracy reported in this paper compares favorably against previous art.", "Variant of training strategy analysis.", "Training three branches with different orders influences the performance of AG-CNN.", "We perform 4 orders to train AG-CNN: 1) train global branch first, and then local and fusion branch together (G LF); 2) train global and local branch together, and then fusion branch (GL F); 3) train three branches together (GLF); 4) train global, local and fusion branch sequentially (G L F).", "Note that G L F is our three-stage training strategy." ]
[ "higher accuracy" ]
result
{ "title": "Diagnose like a Radiologist: Attention Guided Convolutional Neural Network for Thorax Disease Classification", "abstract": "Abstract-This paper considers the task of thorax disease classification on chest X-ray images. Existing methods generally use the global image as input for network learning. Such a strategy is limited in two aspects. 1) A thorax disease usually happens in (small) localized areas which are disease specific. Training CNNs using global image may be affected by the (excessive) irrelevant noisy areas. 2) Due to the poor alignment of some CXR images, the existence of irregular borders hinders the network performance. In this paper, we address the above problems by proposing a three-branch attention guided convolution neural network (AG-CNN). AG-CNN 1) learns from disease-specific regions to avoid noise and improve alignment, 2) also integrates a global branch to compensate the lost discriminative cues by local branch. Specifically, we first learn a global CNN branch using global images. Then, guided by the attention heat map generated from the global branch, we inference a mask to crop a discriminative region from the global image. The local region is used for training a local CNN branch. Lastly, we concatenate the last pooling layers of both the global and local branches for fine-tuning the fusion branch. The comprehensive experiment is conducted on the ChestX-ray14 dataset. We first report a strong global baseline producing an average AUC of 0.841 with ResNet-50 as backbone. After combining the local cues with the global information, AG-CNN improves the average AUC to 0.868. While DenseNet-121 is used, the average AUC achieves 0.871, which is a new state of the art in the community." }
{ "title": "CheXNet: Radiologist-Level Pneumonia Detection on Chest X-Rays with Deep Learning", "abstract": "We develop an algorithm that can detect pneumonia from chest X-rays at a level exceeding practicing radiologists. Our algorithm, CheXNet, is a 121-layer convolutional neural network trained on ChestX-ray14, currently the largest publicly available chest Xray dataset, containing over 100,000 frontalview X-ray images with 14 diseases. Four practicing academic radiologists annotate a test set, on which we compare the performance of CheXNet to that of radiologists. We find that CheXNet exceeds average radiologist performance on the F1 metric. We extend CheXNet to detect all 14 diseases in ChestX-ray14 and achieve state of the art results on all 14 diseases." }
1808.05744
1711.05225
Introduction
Chest X-ray imaging is the most common imaging examination in practice, with approximately 2 billion procedures per year #REFR .
[ "It is a relatively easy task for radiologists to read and diagnose chest X-ray images.", "However, teaching a computer to process hospital-scale of chest X-ray scans is extremely challenging." ]
[ "The success of chest X-ray disease detection will lay the groundwork for more complex systems to provide consistent, trustable and interpretable second opinions on reading medical images of all kinds of modalities.", "Deep Learning methods have been applied to disease classification, sensitive area localization and tissue segmentation #OTHEREFR .", "The success of deep learning has made computer program an indispensable aid to physicians for disease analysis #OTHEREFR .", "\"ChestX-ray14\" is so far the largest publicly available chest X-rays dataset #OTHEREFR .", "Along with the collection of the dataset, baseline models were also tested on this dataset. The best is a 50 layers ResNet." ]
[ "common imaging examinations", "Chest X" ]
background
{ "title": "Dynamic Routing on Deep Neural Network for Thoracic Disease Classification and Sensitive Area Localization", "abstract": "Abstract. We present and evaluate a new deep neural network architecture for automatic thoracic disease detection on chest X-rays. Deep neural networks have shown great success in a plethora of visual recognition tasks such as image classification and object detection by stacking multiple layers of convolutional neural networks (CNN) in a feed-forward manner. However, the performance gain by going deeper has reached bottlenecks as a result of the trade-off between model complexity and discrimination power. We address this problem by utilizing the recently developed routing-by agreement mechanism in our architecture. A novel characteristic of our network structure is that it extends routing to two types of layer connections (1) connection between feature maps in dense layers, (2) connection between primary capsules and prediction capsules in final classification layer. We show that our networks achieve comparable results with much fewer layers in the measurement of AUC score. We further show the combined benefits of model interpretability by generating Gradient-weighted Class Activation Mapping (Grad-CAM) for localization. We demonstrate our results on the NIH chestX-ray14 dataset that consists of 112,120 images on 30,805 unique patients including 14 kinds of lung diseases." }
{ "title": "CheXNet: Radiologist-Level Pneumonia Detection on Chest X-Rays with Deep Learning", "abstract": "We develop an algorithm that can detect pneumonia from chest X-rays at a level exceeding practicing radiologists. Our algorithm, CheXNet, is a 121-layer convolutional neural network trained on ChestX-ray14, currently the largest publicly available chest Xray dataset, containing over 100,000 frontalview X-ray images with 14 diseases. Four practicing academic radiologists annotate a test set, on which we compare the performance of CheXNet to that of radiologists. We find that CheXNet exceeds average radiologist performance on the F1 metric. We extend CheXNet to detect all 14 diseases in ChestX-ray14 and achieve state of the art results on all 14 diseases." }
1708.05924
1711.05225
Transfer Learning
The idea is that most of the learned knowledge on dataset S can be used in the target dataset with a small amount of additional training. This idea works well in image processing (e.g. #REFR ) and considerably reduces the training time.
[ "Transfer learning #OTHEREFR has been an active and successful field of research in machine learning and especially in image processing.", "In transfer learning, there is a source dataset S and a trained neural network to perform a given task, e.g. classification, regression, or decisioning through RL.", "Training such networks may take a few days or even weeks.", "So, for similar or even slightly different target datasets T, one can avoid training a new network from scratch and instead use the same trained network with a few customizations." ]
[ "In order to use transfer learning in the beer game, assume there exists a source agent i ∈ {1, 2, 3, 4} with trained network S i (with a fixed size on all agents), parameters", "demand distribution D 1 , and co-player policy π 1 .", "The weight matrix W i contains the learned weights such that W q i denotes the weight between layers q and q + 1 of the neural network, where q ∈ {0, . . .", ", nh}, and nh is the number of hidden layers.", "The aim is to train a neural network S j for target agent j ∈ {1, 2, 3, 4}, j = i." ]
[ "target", "dataset" ]
background
{ "title": "A Deep Q-Network for the Beer Game : Reinforcement Learning for Inventory Optimization", "abstract": "Problem definition: The beer game is a widely used game that is played in supply chain management classes to demonstrate the bullwhip effect and the importance of supply chain coordination. The game is a decentralized, multi-agent, cooperative problem that can be modeled as a serial supply chain network in which agents choose order quantities while cooperatively attempting to minimize the network's total cost, even though each agent only observes its own local information. Academic/practical relevance: Under some conditions, a base-stock replenishment policy is optimal. However, in a decentralized supply chain in which some agents act irrationally, there is no known optimal policy for an agent wishing to act optimally. We propose a reinforcement learning (RL) algorithm, based on deep Q-networks, to play the beer game. Our algorithm has no limits on costs and other beer game settings. Like any deep RL algorithm, training can be computationally intensive, but this can be performed ahead of time; the algorithm executes in real time when the game is played. Moreover, we propose a transfer-learning approach so that the training performed for one agent can be adapted quickly for other agents and settings. Results: When playing with teammates who follow a base-stock policy, our algorithm obtains near-optimal order quantities. More importantly, it performs significantly better than a base-stock policy when other agents use a more realistic model of human ordering behavior. Finally, applying transfer-learning reduces the training time by one order of magnitude. Managerial implications: This paper shows how artificial intelligence can be applied to inventory optimization. Our approach can be extended to other supply chain optimization problems, especially those in which supply chain partners act in irrational or unpredictable ways." }
{ "title": "CheXNet: Radiologist-Level Pneumonia Detection on Chest X-Rays with Deep Learning", "abstract": "We develop an algorithm that can detect pneumonia from chest X-rays at a level exceeding practicing radiologists. Our algorithm, CheXNet, is a 121-layer convolutional neural network trained on ChestX-ray14, currently the largest publicly available chest Xray dataset, containing over 100,000 frontalview X-ray images with 14 diseases. Four practicing academic radiologists annotate a test set, on which we compare the performance of CheXNet to that of radiologists. We find that CheXNet exceeds average radiologist performance on the F1 metric. We extend CheXNet to detect all 14 diseases in ChestX-ray14 and achieve state of the art results on all 14 diseases." }
1811.10947
1711.05225
Introduction
For instance, obtaining large samples of speech recordings or x-ray scans is substantially easier than providing an accurate label to each sample #REFR .
[ "The goal of a classifier is to predict the class label y ∈ Y of an object with features x ∈ X .", "Supervised learning of classifiers requires data pairs (x, y), but obtaining labels y for every observed feature x is a costly and/or time-consuming process. This limitation prohibits learning accurate classifiers in many scenarios.", "By contrast, obtaining unlabeled data x alone is often considerably simpler than labeled data (x, y)." ]
[ "This motivates the development of semi-supervised methods that leverage large amounts of unlabeled data in addition to a more limited labeled dataset, denoted D 0 = {x i } and D 1 = {(x i , y i )}, respectively.", "That is, methods applicable to scenarios in which |D 0 | |D 1 |.", "Missing data is a well-studied area in statistics #OTHEREFR .", "To provide description of the fundamental statistical limitations of semi-supervised learning, we consider each feature/label pair to be drawn from an underlying data-generating distribution,", "where ∈ {0, 1} is an indicator denoting whether the class label y is missing or observed." ]
[ "accurate label", "x-ray scans" ]
background
{ "title": "Reliable Semi-Supervised Learning when Labels are Missing at Random", "abstract": "Semi-supervised learning methods are motivated by the availability of large datasets with unlabeled features in addition to labeled data. Unlabeled data is, however, not guaranteed to improve classification performance and has in fact been reported to impair the performance in certain cases. A fundamental source of error arises from restrictive assumptions about the unlabeled features, which result in unreliable classifiers. In this paper, we develop a semi-supervised learning approach that relaxes such assumptions and is capable of providing classifiers that reliably measure the label uncertainty. The approach is applicable using any generative model with a supervised learning algorithm. We illustrate the approach using both handwritten digit and cloth classification data where the labels are missing at random." }
{ "title": "CheXNet: Radiologist-Level Pneumonia Detection on Chest X-Rays with Deep Learning", "abstract": "We develop an algorithm that can detect pneumonia from chest X-rays at a level exceeding practicing radiologists. Our algorithm, CheXNet, is a 121-layer convolutional neural network trained on ChestX-ray14, currently the largest publicly available chest Xray dataset, containing over 100,000 frontalview X-ray images with 14 diseases. Four practicing academic radiologists annotate a test set, on which we compare the performance of CheXNet to that of radiologists. We find that CheXNet exceeds average radiologist performance on the F1 metric. We extend CheXNet to detect all 14 diseases in ChestX-ray14 and achieve state of the art results on all 14 diseases." }
1909.01940
1711.05225
RESULTS
Training and testing on ChestX-ray14 achieves results similar to the ones reported in #REFR .
[ "We train the same neural network architecture with the same hyperparameters at each of the three datasets individually." ]
[ "After training, we load our model and evaluate it with images from the remaining two.", "We summarize our results in Table 2 .", "We can see that the best results for each test set appear when the training set is from the same dataset.", "This shows that clinicians should expect a decrease in the reported performances of machine learning models when applying them in real-world scenarios.", "This decrease may vary according to the dataset distribution in which the model was trained." ]
[ "Training" ]
result
{ "title": "Can we trust deep learning models diagnosis? The impact of domain shift in chest radiograph classification", "abstract": "While deep learning models become more widespread, their ability to handle unseen data and generalize for any scenario is yet to be challenged. In medical imaging, there is a high heterogeneity of distributions among images based on the equipment that generate them and their parametrization. This heterogeneity triggers a common issue in machine learning called domain shift, which represents the difference between the training data distribution and the distribution of where a model is employed. A high domain shift tends to implicate in a poor performance from models. In this work, we evaluate the extent of domain shift on three of the largest datasets of chest radiographs. We show how training and testing with different datasets (e.g. training in ChestX-ray14 and testing in CheXpert) drastically affects model performance, posing a big question over the reliability of deep learning models." }
{ "title": "CheXNet: Radiologist-Level Pneumonia Detection on Chest X-Rays with Deep Learning", "abstract": "We develop an algorithm that can detect pneumonia from chest X-rays at a level exceeding practicing radiologists. Our algorithm, CheXNet, is a 121-layer convolutional neural network trained on ChestX-ray14, currently the largest publicly available chest Xray dataset, containing over 100,000 frontalview X-ray images with 14 diseases. Four practicing academic radiologists annotate a test set, on which we compare the performance of CheXNet to that of radiologists. We find that CheXNet exceeds average radiologist performance on the F1 metric. We extend CheXNet to detect all 14 diseases in ChestX-ray14 and achieve state of the art results on all 14 diseases." }
1811.08615
1711.05225
Methods
Images are resized to 256×256, then featurized by taking the last bottleneck layer of a pretrained DenseNet-121 model #REFR .
[ "Our overall experimental flow follows Figure 1 .", "Notes are featurized via (1) term frequencyinverse document frequency (TF-IDF) over bi-grams, (2) pre-trained GloVe word embeddings #OTHEREFR averaged across the selected section of the report, (3) sentence embeddings, or (4) paragraph embeddings.", "In (3) and (4), we first perform sentence/paragraph splitting, and then fine-tune a deep averaging network (DAN) encoder #OTHEREFR with the corpus. Embeddings are finally averaged across sentences/paragraphs.", "The DAN encoder is pretrained on a variety of data sources and tasks and fine-tuned on the context of report sections." ]
[ "PCA is applied onto the 1024-dimension raw image features to obtain 64-dimension features.", "#OTHEREFR Text features are projected into the 64-dimension image feature space. We use several methods regarding different objectives.", "Embedding Alignment (EA) Here, we find a linear transformation between two sets of matched points Adversarial Domain Adaption (Adv) Adversarial training pits a discriminator, D, implemented as a 2-layer (hidden size 256) neural network using scaled exponential linear units (SELUs) #OTHEREFR , against a projection matrix W, as the generator.", "D is trained to classify points in the joint space according to source modality, and W is trained adversarially to fool D.", "Procrustes Refinement (Adv + Proc) On top of adversarial training, we also use an unsupervised Procrustes induced refinement as in #OTHEREFR ." ]
[ "pretrained DenseNet-121 model" ]
method
{ "title": "Unsupervised Multimodal Representation Learning across Medical Images and Reports", "abstract": "Joint embeddings between medical imaging modalities and associated radiology reports have the potential to offer significant benefits to the clinical community, ranging from cross-domain retrieval to conditional generation of reports to the broader goals of multimodal representation learning. In this work, we establish baseline joint embedding results measured via both local and global retrieval methods on the soon to be released MIMIC-CXR dataset consisting of both chest X-ray images and the associated radiology reports. We examine both supervised and unsupervised methods on this task and show that for document retrieval tasks with the learned representations, only a limited amount of supervision is needed to yield results comparable to those of fully-supervised methods." }
{ "title": "CheXNet: Radiologist-Level Pneumonia Detection on Chest X-Rays with Deep Learning", "abstract": "We develop an algorithm that can detect pneumonia from chest X-rays at a level exceeding practicing radiologists. Our algorithm, CheXNet, is a 121-layer convolutional neural network trained on ChestX-ray14, currently the largest publicly available chest Xray dataset, containing over 100,000 frontalview X-ray images with 14 diseases. Four practicing academic radiologists annotate a test set, on which we compare the performance of CheXNet to that of radiologists. We find that CheXNet exceeds average radiologist performance on the F1 metric. We extend CheXNet to detect all 14 diseases in ChestX-ray14 and achieve state of the art results on all 14 diseases." }
2003.00682
1711.05225
Related work
The paper #REFR proposed a deep convolutional 121-layer network trained on the ChestX-ray14 dataset.
[ "So, a complete study is requisite to conduct deep learning with power over thousand patients' samples to obtain the reliable and accurate predictions.", "In #OTHEREFR offered the significance of AI with a state of art in the classification of chest X-ray and analysis.", "Furthermore, the work #OTHEREFR described this issue besides organized a novel 108,948 front outlook database ChestX-ray8 where the 32,717 X-ray images was unique patients.", "They conducted deep CNNs to validate results on this lung data as well as achieved promising results.", "In #OTHEREFR also addressed that database of chestX-ray8 can be prolonged by containing various classes disease and would be valuable for another research study." ]
[ "Publicly available in this dataset has X-ray images for fourteen diseases.", "They also addressed that their algorithm has been provided very high efficiency.", "The paper #OTHEREFR described that a dataset for big labeled is the point of achievement for classification tasks and prediction.", "They offered a big dataset that contains of 224,316 radiographic chest images from 65,240 patients. CheXpert is the name of the dataset.", "Formerly, they conducted CNNs to indicate labels to this dataset constructed on the prospect indicated by model." ]
[ "deep convolutional 121-layer" ]
background
{ "title": "Disease Detection from Lung X-ray Images based on Hybrid Deep Learning", "abstract": "Lung Disease can be considered as the second most common type of disease for men and women. Many people die of lung disease such as lung cancer, Asthma, CPD (Chronic pulmonary disease) etc. in every year. Early detection of lung cancer can lessen the probability of deaths. In this paper, a chest X ray image dataset has been used in order to diagnosis properly and analysis the lung disease. For binary classification, some important is selected. The criteria include precision, recall, F beta score and accuracy. The fusion of AI and cancer diagnosis are acquiring huge interest as a cancer diagnostic tool. In recent days, deep learning based AI for example Convolutional neural network (CNN) can be successfully applied for disease classification and prediction. This paper mainly focuses the performance of Vanilla neural network, CNN, fusion of CNN and Visual Geometry group based neural network (VGG), fusion of CNN, VGG, STN and finally Capsule network. Normally basic CNN has poor performance for rotated, tilted or other abnormal image orientation. As a result, hybrid systems have been exhibited in order to enhance the accuracy with the maintenance of less training time. All models have been implemented in two groups of data sets: full dataset and sample dataset. Therefore, a comparative analysis has been developed in this paper. Some visualization of the attributes of the dataset has also been showed in this paper." }
{ "title": "CheXNet: Radiologist-Level Pneumonia Detection on Chest X-Rays with Deep Learning", "abstract": "We develop an algorithm that can detect pneumonia from chest X-rays at a level exceeding practicing radiologists. Our algorithm, CheXNet, is a 121-layer convolutional neural network trained on ChestX-ray14, currently the largest publicly available chest Xray dataset, containing over 100,000 frontalview X-ray images with 14 diseases. Four practicing academic radiologists annotate a test set, on which we compare the performance of CheXNet to that of radiologists. We find that CheXNet exceeds average radiologist performance on the F1 metric. We extend CheXNet to detect all 14 diseases in ChestX-ray14 and achieve state of the art results on all 14 diseases." }
1712.07632
1711.05225
Introduction
For example, the recently announced CheXNet model can automatically detect pneumonia from chest X-rays at a level exceeding practicing radiologists #REFR .
[ "Chest X-ray (CXR) imaging is currently the most popular and the most available diagnostic tool for health monitoring and diagnosing many lung diseases, including pneumonia, tuberculosis, cancer, etc.", "However, detecting marks of these diseases from CXRs is a very complicated procedure, which takes involvement of the expert radiologists.", "Application of CXRs is postponed by long manual analysis and detection of lung cancer and it is limited also by shortage of experts.", "For example, in China the annual number of the diagnosed lung cancer cases is huge (>600 thousands), but the number of certified radiologists is low (<80 thousands) for the nation-wide screening of >1.4 billion of citizens.", "Meanwhile, the recent disruptive progress of computing, especially computing on the general purpose graphic processing units (GPU) #OTHEREFR , machine learning, and especially deep learning #OTHEREFR , for image recognition brings a meaningful effect." ]
[ "That is why any automated assistance tools and related machine learning techniques are of great importance for the faster and better identification, classification and segmentation of suspicious regions (like lesions, nodules, etc.) for the subsequent diagnostic.", "The main aim of this paper is to demonstrate efficiency of lung segmentation and bone shadow exclusion techniques for analysis of 2D CXRs by deep learning approach to help radiologists identify suspicious regions in lung cancer patients." ]
[ "chest X" ]
background
{ "title": "Deep Learning with Lung Segmentation and Bone Shadow Exclusion Techniques for Chest X-Ray Analysis of Lung Cancer", "abstract": "Abstract. The recent progress of computing, machine learning, and especially deep learning, for image recognition brings a meaningful effect for automatic detection of various diseases from chest X-ray images (CXRs). Here efficiency of lung segmentation and bone shadow exclusion techniques is demonstrated for analysis of 2D CXRs by deep learning approach to help radiologists identify suspicious lesions and nodules in lung cancer patients. Training and validation was performed on the original JSRT dataset (dataset #01), BSE-JSRT dataset, i.e. the same JSRT dataset, but without clavicle and rib shadows (dataset #02), original JSRT dataset after segmentation (dataset #03), and BSE-JSRT dataset after segmentation (dataset #04). The results demonstrate the high efficiency and usefulness of the considered pre-processing techniques in the simplified configuration even. The pre-processed dataset without bones (dataset #02) demonstrates the much better accuracy and loss results in comparison to the other pre-processed datasets after lung segmentation (datasets #02 and #03)." }
{ "title": "CheXNet: Radiologist-Level Pneumonia Detection on Chest X-Rays with Deep Learning", "abstract": "We develop an algorithm that can detect pneumonia from chest X-rays at a level exceeding practicing radiologists. Our algorithm, CheXNet, is a 121-layer convolutional neural network trained on ChestX-ray14, currently the largest publicly available chest Xray dataset, containing over 100,000 frontalview X-ray images with 14 diseases. Four practicing academic radiologists annotate a test set, on which we compare the performance of CheXNet to that of radiologists. We find that CheXNet exceeds average radiologist performance on the F1 metric. We extend CheXNet to detect all 14 diseases in ChestX-ray14 and achieve state of the art results on all 14 diseases." }
1904.02805
1711.05225
Introduction
They find that deep learning systems rival expert radiologists, as does the recent paper of Rajpurkar et al., which pits its model against radiologists diagnosing pneumonia #REFR . Arevalo et al.
[ "Visual search may also be trivial as in the previous example or may require stronger degrees of expertise accumulated even over many years such as radiologists searching for tumours in mammograms, as well as military surveillance operators, or TSA agents who must go over a high collection of images in the shortest amount of time.", "Indeed the successes of Deep Learning Systems have already been shown to compete with Dermatologists in #OTHEREFR as well as Radiologists #OTHEREFR for cancerous tumor detections.", "Most of the expert systems work has been explored in the medical imaging domain, more specifically in radiology. Litjens et al.", "#OTHEREFR compiled an overview of 300 Deep Learning papers applied to medical imaging.", "In the work of Kooi et al., CNN's and other Computer Aided Detection and Diagnosis (CAD) classifiers are compared to each other as automatic diagnosis agents #OTHEREFR ." ]
[ "benchmark CNN's to classical computer vision models such as HOG and explore the learned representations by such deep networks in the first convolutional layer #OTHEREFR .", "The majority of studies have evaluated automated intelligent agents via classical computer vision or endto-end deep learning architectures v.s. humans. See Litjens et al.", "#OTHEREFR for an overview of 300 Deep Learning papers applied to medical imaging.", "Other bodies of work regarding collaborative humanmachine scenarios in computer vision tasks include: image annotation #OTHEREFR , machine teaching #OTHEREFR , visual conversational agents #OTHEREFR , cognitive optimization #OTHEREFR , and fined-grained categorization #OTHEREFR .", "Conversely, there has also been a recent trend comparing humans against machines in certain tasks with the goal of finding potential biological constraints that are missing in deep networks." ]
[ "deep learning systems" ]
background
{ "title": "Assessment of Faster R-CNN in Man-Machine Collaborative Search", "abstract": "Abstract" }
{ "title": "CheXNet: Radiologist-Level Pneumonia Detection on Chest X-Rays with Deep Learning", "abstract": "We develop an algorithm that can detect pneumonia from chest X-rays at a level exceeding practicing radiologists. Our algorithm, CheXNet, is a 121-layer convolutional neural network trained on ChestX-ray14, currently the largest publicly available chest Xray dataset, containing over 100,000 frontalview X-ray images with 14 diseases. Four practicing academic radiologists annotate a test set, on which we compare the performance of CheXNet to that of radiologists. We find that CheXNet exceeds average radiologist performance on the F1 metric. We extend CheXNet to detect all 14 diseases in ChestX-ray14 and achieve state of the art results on all 14 diseases." }
1901.07441
1711.05225
arXiv:1901.07441v2 [eess.IV] 7 Feb 2019
Using the same repository, #REFR extended the annotations to 14 different pathologies (ChestX-Ray14) and designed a model with a deeper CNN architecture to classify images as 14 pathological entities.
[ "area of research in recent years #OTHEREFR .", "For instance, #OTHEREFR trained a Convolutional Neural Network (CNN) to classify and localize 8 pathologies using the chest x-ray database (ChestX-Ray8) which comprised 108,948 frontal-view x-ray images of 32,717 different patients." ]
[ "This method was reported to obtain greater diagnostic efficiency in the detection of pneumonias when compared to that of radiologists.", "#OTHEREFR proposed the attention guided CNN to help combine global and local information in order to improve recognition performance.", "Chest-XRay14 was also employed by #OTHEREFR to design a network architecture that combines text and image with attention mechanisms capable of generating text that describes the image, while #OTHEREFR introduced a hierarchical model of Recurrent Neural Networks (RNN) with which to generate long paragraphs from the images and obtain semantically linked phrases for the same purpose.", "Despite claims that they achieve and/or surpass physician-level performance, current deep learning models for the classification of pathologies using chest x-rays are proving not to be generalizable across institutions and not yet ready for adoption in real-world clinical settings #OTHEREFR .", "Moreover, warnings of potential unintended consequences of their use are discussed by #OTHEREFR ." ]
[ "deeper CNN architecture" ]
method
{ "title": "PadChest: A large chest x-ray image dataset with multi-label annotated reports", "abstract": "We present a labeled large-scale, high resolution chest x-ray dataset for the automated exploration of medical images along with their associated reports. This dataset includes more than 160,000 images obtained from 67,000 patients that were interpreted and reported by radiologists at Hospital San Juan Hospital (Spain) from 2009 to 2017, covering six different position views and additional information on image acquisition and patient demography. The reports were labeled with 174 different radiographic findings, 19 differential diagnoses and 104 anatomic locations organized as a hierarchical taxonomy and mapped onto standard Unified Medical Language System (UMLS) terminology. Of these reports, 27% were manually annotated by trained physicians and the remaining set was labeled using a supervised method based on a recurrent neural network with attention mechanisms. The labels generated were then validated in an independent test set achieving a 0.93 Micro-F1 score. To the best of our knowledge, this is one of the largest public chest x-ray database suitable for training supervised models concerning radiographs, and the first to contain radiographic reports in Spanish. The PadChest dataset can be downloaded from http://bimcv.cipf.es/bimcv-projects/padchest/." }
{ "title": "CheXNet: Radiologist-Level Pneumonia Detection on Chest X-Rays with Deep Learning", "abstract": "We develop an algorithm that can detect pneumonia from chest X-rays at a level exceeding practicing radiologists. Our algorithm, CheXNet, is a 121-layer convolutional neural network trained on ChestX-ray14, currently the largest publicly available chest Xray dataset, containing over 100,000 frontalview X-ray images with 14 diseases. Four practicing academic radiologists annotate a test set, on which we compare the performance of CheXNet to that of radiologists. We find that CheXNet exceeds average radiologist performance on the F1 metric. We extend CheXNet to detect all 14 diseases in ChestX-ray14 and achieve state of the art results on all 14 diseases." }
1909.02077
1711.05225
Comparison to Prior Work
We compare against the single-stage high-capacity approaches of CheXNet #REFR and Wang et al.
[ "We evaluate general fracture classification performance using five-fold crossvalidation with a 70%/10%/20% training, validation, and testing split, respectively." ]
[ "#OTHEREFR , both of which use DenseNet-121 as backbones and apply global average pooling (GAP) and LSE pooling, respectively.", "Note, that unlike our first stage of §2.1, the pooling is applied to the last feature map.", "We also compare against the single-stage lower-capacity model of ResNet-18, using both GAP and LSE pooling heads. Fig. 3 and Tbl. 1 quantitatively summarizes these experiments.", "As can be seen, all lower-capacity models fare relatively poorly, demonstrating the need to use more descriptive models for global PXR interpretation.", "On the other hand, the first stage of our proposed method achieves an AUCROC of 0.968, compared to the 0.962 achieved by the state-of-the-art single-stage methods #OTHEREFR , demonstrating that our single-stage approach using deep MIL can already outperform prior art." ]
[ "CheXNet" ]
method
{ "title": "Weakly Supervised Universal Fracture Detection in Pelvic X-rays", "abstract": "Abstract. Hip and pelvic fractures are serious injuries with life-threatening complications. However, diagnostic errors of fractures in pelvic X-rays (PXRs) are very common, driving the demand for computer-aided diagnosis (CAD) solutions. A major challenge lies in the fact that fractures are localized patterns that require localized analyses. Unfortunately, the PXRs residing in hospital picture archiving and communication system do not typically specify region of interests. In this paper, we propose a two-stage hip and pelvic fracture detection method that executes localized fracture classification using weakly supervised ROI mining. The first stage uses a large capacity fully-convolutional network, i.e., deep with high levels of abstraction, in a multiple instance learning setting to automatically mine probable true positive and definite hard negative ROIs from the whole PXR in the training data. The second stage trains a smaller capacity model, i.e., shallower and more generalizable, with the mined ROIs to perform localized analyses to classify fractures. During inference, our method detects hip and pelvic fractures in one pass by chaining the probability outputs of the two stages together. We evaluate our method on 4 410 PXRs, reporting an area under the ROC curve value of 0.975, the highest among state-of-the-art fracture detection methods. Moreover, we show that our two-stage approach can perform comparably to human physicians (even outperforming emergency physicians and surgeons), in a preliminary reader study of 23 readers." }
{ "title": "CheXNet: Radiologist-Level Pneumonia Detection on Chest X-Rays with Deep Learning", "abstract": "We develop an algorithm that can detect pneumonia from chest X-rays at a level exceeding practicing radiologists. Our algorithm, CheXNet, is a 121-layer convolutional neural network trained on ChestX-ray14, currently the largest publicly available chest Xray dataset, containing over 100,000 frontalview X-ray images with 14 diseases. Four practicing academic radiologists annotate a test set, on which we compare the performance of CheXNet to that of radiologists. We find that CheXNet exceeds average radiologist performance on the F1 metric. We extend CheXNet to detect all 14 diseases in ChestX-ray14 and achieve state of the art results on all 14 diseases." }
1807.05275
1802.02209
II. RELATED WORK
IONet #REFR is an end-to-end learned INS that provides a continuous trajectory estimate directly from raw inertial data using an LSTM network.
[ "Furthermore, the system in #OTHEREFR requires that a moving average filter be applied to the SVM output in order to remove false-positive detections; this may erroneously remove correctly-predicted zero-velocity events for running users. Our approach differs from Park et al.", "in that we train a single model for zero-velocity detection that operates independently of the current motion type.", "This allows our model to generalize to motions that cannot easily be discretized into predefined classes.", "We draw further inspiration from other implementations that use deep learning to process inertial data. Hannink et al.", "#OTHEREFR train a deep convolutional neural network (CNN) to regress human stride length from foot-mounted inertial data." ]
[ "While this implementation fully replaces the filtering architecture with a learned model, we believe that zero-velocity detection is an integral part of the architecture and that an end-to-end method would have difficulty reproducing the accuracy of a zero-velocity-aided system.", "Instead, we simply replace an error-prone component of the system (i.e., the zero-velocity detector) with a learned model, without modifying the remainder of the INS." ]
[ "raw inertial data" ]
method
{ "title": "LSTM-Based Zero-Velocity Detection for Robust Inertial Navigation", "abstract": "Abstract-We present a method to improve the accuracy of a zero-velocity-aided inertial navigation system (INS) by replacing the standard zero-velocity detector with a long short-term memory (LSTM) neural network. While existing threshold-based zero-velocity detectors are not robust to varying motion types, our learned model accurately detects stationary periods of the inertial measurement unit (IMU) despite changes in the motion of the user. Upon detection, zero-velocity pseudo-measurements are fused with a dead reckoning motion model in an extended Kalman filter (EKF). We demonstrate that our LSTM-based zero-velocity detector, used within a zero-velocity-aided INS, improves zero-velocity detection during human localization tasks. Consequently, localization accuracy is also improved. Our system is evaluated on more than 7.5 km of indoor pedestrian locomotion data, acquired from five different subjects. We show that 3D positioning error is reduced by over 34% compared to existing fixed-threshold zero-velocity detectors for walking, running, and stair climbing motions. Additionally, we demonstrate how our learned zero-velocity detector operates effectively during crawling and ladder climbing. Our system is calibration-free (no careful threshold-tuning is required) and operates consistently with differing users, IMU placements, and shoe types, while being compatible with any generic zero-velocityaided INS." }
{ "title": "IONet: Learning to Cure the Curse of Drift in Inertial Odometry", "abstract": "Inertial sensors play a pivotal role in indoor localization, which in turn lays the foundation for pervasive personal applications. However, low-cost inertial sensors, as commonly found in smartphones, are plagued by bias and noise, which leads to unbounded growth in error when accelerations are double integrated to obtain displacement. Small errors in state estimation propagate to make odometry virtually unusable in a matter of seconds. We propose to break the cycle of continuous integration, and instead segment inertial data into independent windows. The challenge becomes estimating the latent states of each window, such as velocity and orientation, as these are not directly observable from sensor data. We demonstrate how to formulate this as an optimization problem, and show how deep recurrent neural networks can yield highly accurate trajectories, outperforming state-of-the-art shallow techniques, on a wide range of tests and attachments. In particular, we demonstrate that IONet can generalize to estimate odometry for non-periodic motion, such as a shopping trolley or baby-stroller, an extremely challenging task for existing techniques. Fast and accurate indoor localization is a fundamental need for many personal applications, including smart retail, public places navigation, human-robot interaction and augmented reality. One of the most promising approaches is to use inertial sensors to perform dead reckoning, which has attracted great attention from both academia and industry, because of its superior mobility and flexibility (Lymberopoulos et al. 2015) . Recent advances of MEMS (Micro-electro-mechanical systems) sensors have enabled inertial measurement units (IMUs) small and cheap enough to be deployed on smartphones. However, the low-cost inertial sensors on smartphones are plagued by high sensor noise, leading to unbounded system drifts. Based on Newtonian mechanics, traditional strapdown inertial navigation systems (SINS) integrate IMU measurements directly. They are hard to realize on accuracy-limited IMU due to exponential error propagation through integration. To address these problems, stepbased pedestrian dead reckoning (PDR) has been proposed. This approach estimates trajectories by detecting steps, estimating step length and heading, and updating locations per step (Li et al. 2012) . Instead of double integrating accelerations into locations, a step length update mitigates exponential increasing drifts into linear increasing drifts. However, dynamic step estimation is heavily influenced by sensor noise, user's walking habits and phone attachment changes, causing unavoidable errors to the entire system (Brajdic and Harle 2013). In some scenarios, no steps can be detected, for example, if a phone is placed on a baby stroller or shopping trolley, the assumption of periodicity, exploited by stepbased PDR would break down. Therefore, the intrinsic problems of SINS and PDR prevent widespread use of inertial localization in daily life. The architecture of two existing methods is illustrated in Figure 2 . To cure the unavoidable 'curse' of inertial system drifts, we break the cycle of continuous error propagation, and reformulate inertial tracking as a sequential learning problem. Instead of developing multiple modules for step-based PDR, our model can provide continuous trajectory for indoor users from raw data without the need of any hand-engineering, as shown in Figure 1 . 
Our contributions are three-fold: • We cast the inertial tracking problem as a sequential learning approach by deriving a sequence-based physical model from Newtonian mechanics. • We propose the first deep neural network (DNN) framework that learns location transforms in polar coordinates from raw IMU data, and constructs inertial odometry regardless of IMU attachment." }
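The citing paper in the record above replaces a threshold-based zero-velocity detector with an LSTM classifier over raw IMU samples, whose output is fused as pseudo-measurements in an EKF. The record does not specify network dimensions, so the hidden size, depth, and window length below are illustrative assumptions, not the authors' configuration:

```python
import torch
import torch.nn as nn

class ZeroVelocityLSTM(nn.Module):
    """Binary zero-velocity classifier over 6-axis IMU windows (illustrative sizes)."""

    def __init__(self, hidden: int = 64, layers: int = 2):
        super().__init__()
        self.lstm = nn.LSTM(input_size=6, hidden_size=hidden,
                            num_layers=layers, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, imu: torch.Tensor) -> torch.Tensor:
        # imu: (batch, window_len, 6) = [accel_xyz, gyro_xyz] per sample
        out, _ = self.lstm(imu)
        return self.head(out[:, -1])  # logit: stationary vs. moving at window end

detector = ZeroVelocityLSTM()
# 100-sample windows (~1 s at 100 Hz); sigmoid gives a stationarity probability
probs = torch.sigmoid(detector(torch.randn(4, 100, 6)))
```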
1903.01534
1802.02209
Feature Encoder
Inspired by IONet #REFR , we use a two-layer Bi-directional LSTM with 128 hidden states as the Inertial Feature Encoder f_inertial.
[ "Ideally, we want the Visual Encoder f vision to learn geometrically meaningful features rather than features related with appearance or context.", "For this reason, instead of using a PoseNet model #OTHEREFR , as commonly found in other DL-based VO approaches #OTHEREFR , we use FlowNetSimple #OTHEREFR as our feature encoder.", "Flownet provides features that are suited for optical flow prediction. The network consists of nine convolutional layers.", "The size of the receptive fields gradually reduces from 7×7 to 5×5 and finally 3×3, with stride two for the first six.", "Each layer is followed by a ReLU nonlinearity except for the last one, and we use the features from the last convolutional layer a V as our visual feature: Inertial Feature Encoder: Inertial data streams have a strong temporal component, and are generally available at higher frequency (∼100 Hz) than images (∼10 Hz)." ]
[ "As shown in Figure 2 , a window of inertial measurements x I between each two images is fed to the inertial feature encoder in order to extract the dimensional feature vector a I :" ]
[ "Inertial Feature Encoder" ]
method
{ "title": "Selective Sensor Fusion for Neural Visual-Inertial Odometry", "abstract": "Deep learning approaches for Visual-Inertial Odometry (VIO) have proven successful, but they rarely focus on incorporating robust fusion strategies for dealing with imperfect input sensory data. We propose a novel end-to-end selective sensor fusion framework for monocular VIO, which fuses monocular images and inertial measurements in order to estimate the trajectory whilst improving robustness to real-life issues, such as missing and corrupted data or bad sensor synchronization. In particular, we propose two fusion modalities based on different masking strategies: deterministic soft fusion and stochastic hard fusion, and we compare with previously proposed direct fusion baselines. During testing, the network is able to selectively process the features of the available sensor modalities and produce a trajectory at scale. We present a thorough investigation on the performances on three public autonomous driving, Micro Aerial Vehicle (MAV) and hand-held VIO datasets. The results demonstrate the effectiveness of the fusion strategies, which offer better performances compared to direct fusion, particularly in presence of corrupted data. In addition, we study the interpretability of the fusion networks by visualising the masking layers in different scenarios and with varying data corruption, revealing interesting correlations between the fusion networks and imperfect sensory input data." }
{ "title": "IONet: Learning to Cure the Curse of Drift in Inertial Odometry", "abstract": "Inertial sensors play a pivotal role in indoor localization, which in turn lays the foundation for pervasive personal applications. However, low-cost inertial sensors, as commonly found in smartphones, are plagued by bias and noise, which leads to unbounded growth in error when accelerations are double integrated to obtain displacement. Small errors in state estimation propagate to make odometry virtually unusable in a matter of seconds. We propose to break the cycle of continuous integration, and instead segment inertial data into independent windows. The challenge becomes estimating the latent states of each window, such as velocity and orientation, as these are not directly observable from sensor data. We demonstrate how to formulate this as an optimization problem, and show how deep recurrent neural networks can yield highly accurate trajectories, outperforming state-of-the-art shallow techniques, on a wide range of tests and attachments. In particular, we demonstrate that IONet can generalize to estimate odometry for non-periodic motion, such as a shopping trolley or baby-stroller, an extremely challenging task for existing techniques. Fast and accurate indoor localization is a fundamental need for many personal applications, including smart retail, public places navigation, human-robot interaction and augmented reality. One of the most promising approaches is to use inertial sensors to perform dead reckoning, which has attracted great attention from both academia and industry, because of its superior mobility and flexibility (Lymberopoulos et al. 2015) . Recent advances of MEMS (Micro-electro-mechanical systems) sensors have enabled inertial measurement units (IMUs) small and cheap enough to be deployed on smartphones. However, the low-cost inertial sensors on smartphones are plagued by high sensor noise, leading to unbounded system drifts. Based on Newtonian mechanics, traditional strapdown inertial navigation systems (SINS) integrate IMU measurements directly. They are hard to realize on accuracy-limited IMU due to exponential error propagation through integration. To address these problems, stepbased pedestrian dead reckoning (PDR) has been proposed. This approach estimates trajectories by detecting steps, estimating step length and heading, and updating locations per step (Li et al. 2012) . Instead of double integrating accelerations into locations, a step length update mitigates exponential increasing drifts into linear increasing drifts. However, dynamic step estimation is heavily influenced by sensor noise, user's walking habits and phone attachment changes, causing unavoidable errors to the entire system (Brajdic and Harle 2013). In some scenarios, no steps can be detected, for example, if a phone is placed on a baby stroller or shopping trolley, the assumption of periodicity, exploited by stepbased PDR would break down. Therefore, the intrinsic problems of SINS and PDR prevent widespread use of inertial localization in daily life. The architecture of two existing methods is illustrated in Figure 2 . To cure the unavoidable 'curse' of inertial system drifts, we break the cycle of continuous error propagation, and reformulate inertial tracking as a sequential learning problem. Instead of developing multiple modules for step-based PDR, our model can provide continuous trajectory for indoor users from raw data without the need of any hand-engineering, as shown in Figure 1 . 
Our contributions are three-fold: • We cast the inertial tracking problem as a sequential learning approach by deriving a sequence-based physical model from Newtonian mechanics. • We propose the first deep neural network (DNN) framework that learns location transforms in polar coordinates from raw IMU data, and constructs inertial odometry regardless of IMU attachment." }
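The record above states that the inertial feature encoder f_inertial is a two-layer bidirectional LSTM with 128 hidden states. The sketch below matches that description; how the output sequence is reduced to the single vector a_I is not specified in the record, so taking the final time step is an assumption:

```python
import torch
import torch.nn as nn

class InertialFeatureEncoder(nn.Module):
    """Two-layer Bi-LSTM over an IMU window, per the description in the record."""

    def __init__(self, hidden: int = 128):
        super().__init__()
        self.lstm = nn.LSTM(input_size=6, hidden_size=hidden, num_layers=2,
                            bidirectional=True, batch_first=True)

    def forward(self, x_i: torch.Tensor) -> torch.Tensor:
        # x_i: (batch, samples_between_frames, 6) inertial window between two images
        out, _ = self.lstm(x_i)
        # Assumption: take the last time step of the Bi-LSTM output as a_I.
        return out[:, -1]  # (batch, 2 * hidden)

encoder = InertialFeatureEncoder()
a_i = encoder(torch.randn(4, 10, 6))  # e.g., ~10 IMU samples per image pair
assert a_i.shape == (4, 256)  # bidirectional doubles the 128 hidden states
```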
2001.04061
1802.02209
B. Inertial Odometry Neural Networks
Inertial Odometry Neural Networks (IONet) #REFR are able to learn a user's ego-motion directly from raw inertial data and to handle more general motions.
[]
[ "For example, tracking a trolley or other wheeled configurations is quite challenging for PDR models, due to the fact that no walking step or periodicity patterns can be detected in this case.", "In contrast, IONet can regress the location transformation (the average speed) during any fixed window of time, without the explicit components of step detection and step length estimation as in PDRs.", "We implemented and trained the IONet model on the OxIOD dataset, to show the effectiveness of OxIOD for data-driven approaches.", "The continuous inertial readings are segmented into independent sequences of n frames IMU data {(a i , w i )} n i=1 , consisting of 3-dimensional accelerations a i ∈ R 3 and 3-dimensional angular rates w i ∈ R 3 at the time step i.", "The 6-dimensional inertial data are preprocessed to normalise the accelerations and angular rates into a same scale." ]
[ "Inertial Odometry Neural" ]
background
{ "title": "Deep Learning based Pedestrian Inertial Navigation: Methods, Dataset and On-Device Inference", "abstract": "Modern inertial measurements units (IMUs) are small, cheap, energy efficient, and widely employed in smart devices and mobile robots. Exploiting inertial data for accurate and reliable pedestrian navigation supports is a key component for emerging Internet-of-Things applications and services. Recently, there has been a growing interest in applying deep neural networks (DNNs) to motion sensing and location estimation. However, the lack of sufficient labelled data for training and evaluating architecture benchmarks has limited the adoption of DNNs in IMU-based tasks. In this paper, we present and release the Oxford Inertial Odometry Dataset (OxIOD), a first-ofits-kind public dataset for deep learning based inertial navigation research, with fine-grained ground-truth on all sequences. Furthermore, to enable more efficient inference at the edge, we propose a novel lightweight framework to learn and reconstruct pedestrian trajectories from raw IMU data. Extensive experiments show the effectiveness of our dataset and methods in achieving accurate data-driven pedestrian inertial navigation on resource-constrained devices." }
{ "title": "IONet: Learning to Cure the Curse of Drift in Inertial Odometry", "abstract": "Inertial sensors play a pivotal role in indoor localization, which in turn lays the foundation for pervasive personal applications. However, low-cost inertial sensors, as commonly found in smartphones, are plagued by bias and noise, which leads to unbounded growth in error when accelerations are double integrated to obtain displacement. Small errors in state estimation propagate to make odometry virtually unusable in a matter of seconds. We propose to break the cycle of continuous integration, and instead segment inertial data into independent windows. The challenge becomes estimating the latent states of each window, such as velocity and orientation, as these are not directly observable from sensor data. We demonstrate how to formulate this as an optimization problem, and show how deep recurrent neural networks can yield highly accurate trajectories, outperforming state-of-the-art shallow techniques, on a wide range of tests and attachments. In particular, we demonstrate that IONet can generalize to estimate odometry for non-periodic motion, such as a shopping trolley or baby-stroller, an extremely challenging task for existing techniques. Fast and accurate indoor localization is a fundamental need for many personal applications, including smart retail, public places navigation, human-robot interaction and augmented reality. One of the most promising approaches is to use inertial sensors to perform dead reckoning, which has attracted great attention from both academia and industry, because of its superior mobility and flexibility (Lymberopoulos et al. 2015) . Recent advances of MEMS (Micro-electro-mechanical systems) sensors have enabled inertial measurement units (IMUs) small and cheap enough to be deployed on smartphones. However, the low-cost inertial sensors on smartphones are plagued by high sensor noise, leading to unbounded system drifts. Based on Newtonian mechanics, traditional strapdown inertial navigation systems (SINS) integrate IMU measurements directly. They are hard to realize on accuracy-limited IMU due to exponential error propagation through integration. To address these problems, stepbased pedestrian dead reckoning (PDR) has been proposed. This approach estimates trajectories by detecting steps, estimating step length and heading, and updating locations per step (Li et al. 2012) . Instead of double integrating accelerations into locations, a step length update mitigates exponential increasing drifts into linear increasing drifts. However, dynamic step estimation is heavily influenced by sensor noise, user's walking habits and phone attachment changes, causing unavoidable errors to the entire system (Brajdic and Harle 2013). In some scenarios, no steps can be detected, for example, if a phone is placed on a baby stroller or shopping trolley, the assumption of periodicity, exploited by stepbased PDR would break down. Therefore, the intrinsic problems of SINS and PDR prevent widespread use of inertial localization in daily life. The architecture of two existing methods is illustrated in Figure 2 . To cure the unavoidable 'curse' of inertial system drifts, we break the cycle of continuous error propagation, and reformulate inertial tracking as a sequential learning problem. Instead of developing multiple modules for step-based PDR, our model can provide continuous trajectory for indoor users from raw data without the need of any hand-engineering, as shown in Figure 1 . 
Our contributions are three-fold: • We cast the inertial tracking problem as a sequential learning approach by deriving a sequence-based physical model from Newtonian mechanics. • We propose the first deep neural network (DNN) framework that learns location transforms in polar coordinates from raw IMU data, and constructs inertial odometry regardless of IMU attachment." }
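The record above describes IONet-style preprocessing: the continuous readings are segmented into independent windows of n frames {(a_i, w_i)} and the six channels are normalised to the same scale. The window length and the per-channel standardisation below are illustrative choices under those assumptions, not the paper's exact pipeline:

```python
import numpy as np

def segment_and_normalize(accel: np.ndarray, gyro: np.ndarray, n: int = 200):
    """Cut a continuous 6-DoF IMU stream into independent windows of n frames.

    accel, gyro: (T, 3) arrays of accelerations a_i and angular rates w_i.
    Returns (num_windows, n, 6) windows, with each channel standardized so
    that accelerations and angular rates share a common scale (an
    illustrative normalisation choice).
    """
    stream = np.concatenate([accel, gyro], axis=1)                      # (T, 6)
    stream = (stream - stream.mean(axis=0)) / (stream.std(axis=0) + 1e-8)
    num_windows = stream.shape[0] // n
    return stream[: num_windows * n].reshape(num_windows, n, 6)

# 10 s of 100 Hz IMU data -> five 2 s windows of shape (200, 6)
windows = segment_and_normalize(np.random.randn(1000, 3), np.random.randn(1000, 3))
assert windows.shape == (5, 200, 6)
```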