Dataset schema: _id (string, length 36), text (string, 200 to 328k characters), label (string, 5 classes).
71abf0e5-4c4f-454a-a4a1-432b3c8fdd7f
Table REF presents an overview of all data smells along with their distribution. The remainder of this report frequently refers to these smells by their unique key, which is also provided here. The redundant and categorical value smells are the most common categories, with a total occurrence of 33 and 17 respectively. The missing and string value smells are the least common categories, with a total occurrence of 13 and 12 respectively. red-corr is the most common smell, observed in 19 of the 25 datasets. The remaining top five smells include cat-hierarchy, miss-null, red-uid and misc-unit, which are observed in more than 10 datasets. The least frequently observed smells include misc-balance, str-human, misc-sensitive, miss-sp-val and miss-bin, which are observed in fewer than 5 datasets.
r
f1081419-22d0-4bef-8ef6-6c6e972e4d6d
Finally, we also analyse the distribution of smells within the datasets. Figure REF shows a two-dimensional histogram of the smells and datasets such that the intersection of datasets where a particular smell occurred is filled. The histogram is colour-coded based on the smell category, which allows us to observe the most common smell categories at a glance. The figure also contains two marginal plots along the x and y axes. The marginal plot along the x axis presents a count of smells within each dataset, so that we can identify the datasets with the most and fewest smells. Similarly, the marginal plot along the y axis presents a count of each smell across all datasets, so that we can identify the most and least frequently occurring smells. <FIGURE>
r
2d5ac67c-6736-4a5a-96b2-d5ccb4ed9f28
The remainder of this section presents the smell groups and their corresponding smells in more detail. We present examples of the smells discovered along with an explanation of the underlying problems that may arise. Where applicable, potential strategies to mitigate the problem, context in which the smells may not apply and references from literature (both scientific and grey) are also presented.
r
c51864da-13c3-4ba3-822e-835552d97a46
Code smells are frequently used by software engineers to identify potential bugs, sources of technical debt and weak design choices. Code smells in the context of traditional software have existed for over three decades and have been extensively studied by the software engineering research community. With the growing popularity of AI and its adoption in high-stakes domains where a data-centric approach is adopted, data smells are seen as a much-needed aid for machine learning practitioners. This study examined 25 public datasets and identified 14 recurrent data quality issues (coined as data smells) that can lead to problems when training machine learning models. Our results indicate a need for better data documentation and point to an accumulation of technical debt due to the lack of standardised practices in upstream stages of machine learning pipelines.
d
9a596437-2bc7-4376-956c-61540b27e825
We consider our collection of data smells and the analysis of their prevalence a first step towards aiding data scientists in the initial stages of data analysis where human involvement is necessary. We hope that our work raises awareness amongst practitioners to write better documentation for their datasets and follow best practices during data collection to minimise technical debt in upstream stages. As a next step, we aim to grow the data smells catalogue by analysing more datasets. Furthermore, we wish to remove the constraints introduced by IC2 (see Section  and Table REF ) and include datasets larger than 1GB in size.
d
2fd80a73-2ad2-4ea1-a9d3-4ee8e2051bd5
The way speech prosody encodes linguistic, paralinguistic and non-linguistic information via multiparametric representations of the speech signals is still an open issue. Most models of intonation postulate that this encoding is performed by local and salient spatio-temporal patterns such as tones, atoms or breaks inscribed into global gauges such as declinations or steps. Phonological structures are supposed to link socio-communicative functions with patterns and gauges.
i
a93868f8-4c1d-408e-9982-1d6c693c9a59
The Gestalt model of Aubergé and Bailly [1]} proposes that the encoding is direct, i.e. shapes make sense, and is performed by spatio-temporal patterns that cue both each socio-communicative function and its scope, i.e. the linguistic units that are involved, e.g. the element carrying emphasis, the part of the utterance carrying doubt or the target syllable of a tone. The Superposition of Functional Contours (SFC) model developed by Bailly et al. [2]}, [3]}, [4]} bets that the parallel encoding of socio-communicative functions at multiple scopes is simply performed by overlapping-and-adding the function-specific spatio-temporal patterns. The problem of decomposing prosody into these elementary patterns is ill-posed, since the SFC does not impose any a priori constraints on the spatio-temporal patterns, such as bandwidth or shape.
i
19bf014c-0cb8-41c7-80ab-9e06ab8f61d9
These function-specific patterns, in fact, emerge from statistical modelling. Given a dataset that contains multiple instances of these patterns, the SFC extracts the shapes and their average contributions thanks to an iterative analysis-by-synthesis training process that consists of training function-specific pattern generators. They are called multiparametric contours because the generated shapes feed a multiparametric score, i.e. one including melody, rhythm, head motion, etc. The SFC has been successfully used to model different functions acting at various linguistic levels, including: attitudes [1]}, grammatical dependencies [2]}, cliticisation [3]}, focus [4]}, as well as tones in Mandarin [5]}.
i
8c4320e2-49fd-4ab5-9ba4-5e8a9300fe01
One shortcoming of the SFC model is that it is not sensitive to prominence: prosodic contours are simply superposed-and-added with no possibility of weighting their contributions. In this paper, we supplement the SFC architecture with components responsible for weighting the contribution of the elementary contours in the decomposition. The weighted SFC (WSFC) consists of adjoining a weight module to each contour generator: while the contour generator still computes a multiparametric contour for each rhythmic unit of the scope, the weight module computes its contribution given the context of the scope in the utterance.
i
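A minimal sketch of this weighted overlap-and-add idea, assuming illustrative call signatures for the contour generators and weight modules (in the WSFC these are trained neural modules; the interfaces below are not taken from the paper):

```python
# Hedged sketch of weighted superposition of function-specific contours.
# Each generator proposes an elementary contour over its scope, the paired
# weight module scales its contribution, and contributions are summed.
import numpy as np

def weighted_superposition(generators, weight_modules, scopes, n_units, n_params):
    prosody = np.zeros((n_units, n_params))        # e.g. columns: f0 targets, durations, ...
    for gen, weigh, (unit_idx, context) in zip(generators, weight_modules, scopes):
        contour = gen(context)                     # shape (len(unit_idx), n_params)
        w = weigh(context)                         # scalar prominence weight for this scope
        prosody[unit_idx] += w * contour           # overlap-and-add the weighted contour
    return prosody
```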
60a14e43-ae7e-43ea-b06f-f8709bfdeeeb
We assessed the plausibility of the proposed WSFC, and used it to explore two prosodic phenomena: i) the impact of the attitude, and ii) the impact of emphasis on the prominence of the other functional contours in the utterance. The two phenomena were explored in two different languages: French and Chinese. The results show that the integration of weighting in the contour generators is effective, relevant and robust. Moreover, the added degree of freedom improves the model's performance by providing more coherent contours. The whole implementation of the system has been licensed as free software and is available on GitHub: https://github.com/bgerazov/WSFC.
i
84a1474a-1082-417f-a831-be46079e5abe
We proposed a prosody model capable of capturing the prominence of elementary prosodic contours based on their context of use. The WSFC has also been shown to improve the modelling performance of the SFC due to the added weighting mechanism. We have demonstrated its robustness and its usefulness in analysing the impact of attitudes and emphasis on prominence in French and Chinese. The described methodology can be used to analyse other effects of context on prominence. Moreover, the proposed WSFC architecture allows for task-specific contextual inputs. We are currently exploring realizations of attitudes in several other languages. <FIGURE>
d
5679623c-895c-4d89-b544-08ee28ceb416
When developing electrical equipment, engineers optimize initial design proposals by carefully identifying a large number of design parameters. In doing so, they rely on rules of thumb, know-how and previous experience, existing standards and, increasingly, simulation and optimization tools. Numerical optimization is used to simultaneously improve possibly conflicting quantities of interest (QoI), robustness and costs. While stochastic optimization plays a major role, derivative-based deterministic optimization algorithms are becoming, again, increasingly interesting [1]}. Their advantages over stochastic methods are faster convergence, i.e. less expensive optimization runs, and efficient coupling with mesh refinement and reduced order models. However, in the case of derivative-based approaches, the problem of efficient gradient computation arises. The most common methods for gradient computation, e.g. finite differences and the DSM, are not well suited for applications with many design parameters because their computational costs scale with the number of parameters [2]}, [3]}. The adjoint method (AM), on the other hand, has computational costs that are almost independent of the number of parameters [2]}, [5]}, [3]}. The AM has previously been applied in the analysis of electric networks. The first formulation of the AM in this context, which was based on Tellegen's theorem [7]}, was published by Director and Rohrer [8]}. Only since the 2000s has the AM been applied to electromagnetic problems more often, and it remains an active field of study [3]}, [10]}, [11]}, [12]}, [13]}, [14]}, [15]}. In the field of high-voltage (HV) engineering, Zhang et al. recently used the AM for topology optimization of a station-class surge arrester model with linear media at steady state [16]}. However, many HV devices are exposed to transient overvoltages and contain strongly nonlinear materials, so that an investigation of the steady state alone, i.e. in the frequency domain, is not sufficient [17]}, [18]}. Therefore, in this work, the AM is formulated and solved numerically for the nonlinear transient EQS problem. Additionally, a method for calculating the sensitivity of QoI evaluated at a given point in time is presented, since the AM naturally only considers time-integrated QoI. The AM is validated using an analytical example. Subsequently, a nonlinear resistively graded 320 kV HVDC cable joint under impulse operation serves as a prominent technical example. It is shown that the AM is capable of computing the sensitivities of this highly transient nonlinear problem with reasonable computational effort. This is an important step towards gradient-based optimization of electric devices in HV engineering.
i
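As a hedged illustration of the general principle in generic notation (not the specific transient EQS formulation derived in this work): for a discretized state equation \(R(u,p)=0\) with state \(u\), design parameters \(p\), and a quantity of interest \(J(u,p)\), the adjoint method obtains the full gradient from a single additional linear solve, \[ \left(\frac{\partial R}{\partial u}\right)^{\top}\lambda = \left(\frac{\partial J}{\partial u}\right)^{\top}, \qquad \frac{\mathrm{d}J}{\mathrm{d}p} = \frac{\partial J}{\partial p} - \lambda ^{\top}\frac{\partial R}{\partial p}, \] so the cost is essentially independent of the number of parameters, whereas finite-difference or direct sensitivities require one additional solve per parameter.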
73df7a8a-a032-4666-9670-4ab7dfa012c4
In this section, the AM () for transient EQS problems is validated. In the first step, the layered resistor of Fig. REF is considered, and the FE adjoint and analytic sensitivities are compared. In a second step, the method is applied to a nonlinear 320 kV cable joint specimen and the results are validated using results obtained by the DSM as a reference.
r
2ab61283-694d-426c-82a9-4e190e4ee5a5
The adjoint variable method is a method for calculating gradients of selected quantities of interest (QoI) with respect to a set of design parameters. Its computational costs are nearly independent of the number of design parameters, making it very efficient for problems where the number of parameters is larger than the number of quantities of interest. In this work, the AM is adopted for transient EQS problems with nonlinear material characteristics. The adjoint PDE is presented and formulated as a 2D axisymmetric finite element problem. It is shown how to consider quantities of interest evaluated at specific points in space or time. After validating the method against an analytic example, it is applied to a 320 kV HVDC cable joint featuring a layer of nonlinear FGM, which is exposed to an impulse overvoltage. The results of the AM are validated using the DSM, and it is shown that the computational costs of the AM are, even for this strongly nonlinear technical example, within reasonable limits. This is an important step towards gradient-based optimization of HV equipment.
d
99bc48bb-c8c9-45b7-b0f7-9a2bce380315
In recent years, cloud computing has been of great interest to researchers [1]}, [2]}. Its role in providing on-demand services and resources has opened its way into a variety of technological environments, ranging from data centers [3]}, power systems [3]} and video delivery systems [3]} to earthquake command systems [6]} and intelligent transportation [3]}, [8]}.
i
5acaa144-99aa-4ca6-8c41-b1a9aa12b026
A variety of design objectives including fairness [1]}, fault tolerance [2]}, energy consumption [3]}, [4]} and reliability [5]} are considered in the design of cloud computing systems. However, security is probably the most critical design objective in this field [6]}, [7]}, [8]}, [9]}, [10]}, [11]}.
i
c25a8830-43c0-4629-b870-d5f6a928ddfc
IoT frequently appears in the ecosystem of Cloud computing. This technology integrates geographically-distributed cyber-physical devices or cyber-enabled systems with the goal of providing strategic services [1]}, [2]}. The application areas of IoT vary from transport and healthcare to agriculture and FinTech. Similar to the case of Cloud computing, security is the most critical objective in the design of IoT systems [3]}, [4]}, [5]}, [6]}.
i
a24a0276-b3e3-404c-afe0-d94899d62170
Figure REF introduces the icons we will use in the rest of this paper for Cloud, IoT, CAIoT and IoTBC. In this figure, overlapping parallelograms represent technologies on top of each other. As seen in the figure, the dichotomy of Cloud and IoT leads to two emerging technologies, namely CAIoT and IoTBC. CAIoT deploys IoT on top of Cloud, while in IoTBC, Cloud services are provided leveraging IoT capabilities.
i
548bf8a5-344c-4054-b3df-5cd50996fc68
The dichotomy of Figure REF needs to be studied from both sides, CAIoT and IoTBC, each of which brings about a variety of challenges, issues and considerations. A comprehensive survey on each of these technologies can pave the way for further research. In this survey, we study CAIoT from a security point of view. The adoption of security controls by CAIoT gives rise to SCAIoT. The notion of SCAIoT is demonstrated in Figure REF . <FIGURE>
i
d43ec014-7e9d-4fd7-994d-0dbfd36f1954
We present a survey on the literature of SCAIoT. We identify existing approaches towards the design of SCAIoT. We highlight security challenges faced by each approach. Moreover, we study the security controls used to address the challenges. We establish a layered architecture for SCAIoT, which reflects all the identified approaches. Furthermore, we develop a future roadmap for SCAIoT with a focus on the role of AI in the present and the future of this technology.
i
c2f276ae-128d-48ed-897d-6e7bbef200c4
This review covered several aspects of the dichotomy of Cloud and IoT. The dichotomy gives rise to IoT-Based Cloud (IoTBC) and Cloud-Assisted IoT (CAIoT). This paper focused on the security of CAIoT. This research identified different approaches towards the design of secure CAIoT (SCAIoT) along with the related security challenges and controls. Our review led to the development of a layered architecture for SCAIoT as well as a future roadmap with a focus on the role of AI. Our work in this paper can be continued by studying IoTBC or the unstudied aspects of CAIoT.
d
97890967-a6f3-445d-9468-fb9d00641dc2
Nowadays, the amount of data produced doubles every two years [1]}, and the heyday of computers in the classical sense is coming to an end. Maintaining the current momentum of technological development requires a change in the approach to computing. One of the most promising solutions is to transfer the idea of computing from the field of classical mechanics to quantum mechanics, which creates new and interesting possibilities but at the same time poses significant challenges.
i
166d287a-3895-4e06-9b57-016f5e108ddc
In computer vision systems that for effective operation require processing of large amounts of data in real time, quantum neural networks (QNNs) can prove to be a very attractive solution. On the basis of experiments carried out in recent years, it can be observed that the network training process is becoming quicker and results are more accurate. They also show better generalization capabilities even with small amounts of training data and require several times fewer epochs compared to classical networks. QNNs were successfully used, among others, in the following applications: tree recognition in aerial space of California [1]}, cancer recognition [2]}, facial expression recognition [3]}, vehicle classification [4]}, traffic sign recognition from the LISA database considering the vulnerability of adversarial attacks [5]}, handwriting recognition [6]}, [7]}, [8]}, [9]}, [10]}, [11]}, object segmentation [12]}, [13]}, pneumonia recognition [14]}, classification of ants and bees [15]}, classification of medical images of chest radiography and retinal color of the fundus [16]}, and generative networks [17]}, [18]}.
i
257560eb-a0b5-42cb-af9c-7f89bffa2468
In this paper, we describe the results of our work on a quantum neural network for the classification of traffic signs from the German Traffic Sign Recognition Benchmark (GTSRB) dataset. The aim of our research was to analyze and compare the results obtained using a classical deep convolutional neural network (DCNN) and a quantum neural network. The architecture was implemented using the Python language and the PennyLane library for quantum computing. To the best of our knowledge, this is the only work on traffic sign classification for the popular GTSRB dataset that uses quantum computing techniques. Another paper ([1]}) on a similar topic deals with a different dataset, and the proposed methods have lower efficiency.
i
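As a hedged, minimal sketch of the kind of building block such a hybrid network can use (a 2x2 "quanvolution" patch filter written with PennyLane; the qubit count, encoding and circuit depth are illustrative assumptions, not the architecture evaluated in this work):

```python
# Illustrative quanvolution patch filter: four pixels in, four expectation
# values out. Encoding, entangler depth and parameter shapes are assumptions.
import pennylane as qml
from pennylane import numpy as np

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def quanv_patch(pixels, weights):
    # Encode the four pixel values of a 2x2 patch as rotation angles.
    for i in range(n_qubits):
        qml.RY(np.pi * pixels[i], wires=i)
    # A shallow trainable entangling layer.
    qml.BasicEntanglerLayers(weights, wires=range(n_qubits))
    # One expectation value per qubit -> four output channels per patch.
    return [qml.expval(qml.PauliZ(i)) for i in range(n_qubits)]

weights = np.random.uniform(0, np.pi, size=(1, n_qubits))
print(quanv_patch(np.array([0.1, 0.5, 0.9, 0.0]), weights))
```

Sliding such a filter over the image, as a classical convolution would, yields feature maps that can then be fed to conventional dense or convolutional layers.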
175ac8b8-10c2-4898-a57e-24955a8b106f
The remainder of this paper is organized as follows. Section  provides background information on quantum neural networks. In Section , our motivation and the purpose of our experiment are described in the context of related work on machine learning applied to computer vision systems. Section  presents the experiments conducted and the results obtained. In Section , conclusions and ideas for further development are provided.
i
dbfa8772-8baf-4278-b5b2-17f25bcedd65
Quantum machine learning is a challenging field. This concerns not only the proper design of algorithms but also the very way in which classical real data are encoded in quantum space. Uncertainty also stems from the fact that there is no single official form of a quantum computer, but rather a number of proposals for what it could look like. This may result in going down wrong paths and having to redesign the algorithms multiple times.
d
8b762800-2257-421d-a19e-78f900bb3347
The experiment showed that it is possible to achieve a high classification accuracy (more than 94%) for a neural network with quantum convolution, but raised the question of quantum supremacy. Quantum algorithms require special preprocessing on the dataset, which in the case of hybrid networks, of which the network with quantum convolution is an example, requires significant computational resources, and the results obtained do not show the expected significant superiority over the results returned by the classical network. However, the purpose of the experiment was not to improve the chosen path, but to outline a direction on how to prototype an exemplary quantum-classical neural network model for the multiclass classification problem, which could find its application in vision systems.
d
2bfac99a-dbdf-4699-aead-458cdbefb5b3
The prospects for further development of the project are very broad. We plan to compare the influence of data augmentation on learning accuracy and on the potential overfitting effect for the classical network and the network with quantum convolution. In addition, another task will be undertaken in the area of explainable AI, that is, an attempt to visualize, for example in the form of a heat map, which features of the image influence a given classification result. It would also be interesting to try out the designed network architecture on a real quantum computer, along with an analysis of the temporal performance of the algorithm.
d
99569e2b-a218-4b56-a84d-ad0bd7984003
Medical images may be stored with different image orientations. This difference may affect the results of subsequent segmentation or other computations, since current deep neural network (DNN) systems generally take image inputs and outputs as matrices or tensors, without considering the imaging orientation or the real-world coordinate system. It is therefore crucial to recognize the orientation before further computation. This work aims to study Cardiac Magnetic Resonance (CMR) image orientation with respect to the real-world coordinate system, and to develop an efficient method for orientation recognition.
i
70a58b33-aeb6-40d4-99f0-281764cecdd9
Deep neural networks have performed outstandingly in computer vision and have gradually replaced traditional methods. DNNs also play an important role in medical image processing, such as image segmentation [1]} and myocardial pathology analysis [2]}. For CMR images, standardization of all images is a prerequisite for further DNN-based computing tasks.
i
31507f62-7eaa-4e55-87a8-20e00e5598d9
Most studies in the field of medical image processing have focused only on the downstream computing, so a lot of manual effort is spent on preprocessing. Automatically adjusting the images would save a great deal of time. Nevertheless, recognizing the orientation of CMR images of different modalities and adjusting them into a standard format can be as challenging as the downstream computing tasks [1]}. In a broad sense, orientation recognition is also a kind of image classification task, so a DNN is without doubt an effective way to solve this problem; in this work, we therefore use a DNN as our main method.
i
0edb501b-fe1e-499c-9257-bb108d8c07e5
In most image classification problems, such as ImageNet, we apply transformations to an image that do not change its label; for example, a rotated dog image is still a dog image. However, the orientation label does change if we apply transformations such as flipping to the images. In this work, we utilize this property to build a prediction model. Combining the DNN with this prediction model, we build a framework for orientation recognition.
i
8f587d5b-0f39-4809-8ac1-232d5db799d4
This work is aimed at designing a DNN-based approach to achieve orientation recognition for multiple CMR modalities. Figure REF presents the pipeline of our proposed method. The main contributions of this work are summarized as follows: <FIGURE>
i
9799c25e-4674-4e3f-884a-48d83cb4c135
We propose a scheme to standardize the CMR image orientation and categorize all the orientations for classification. We present a DNN-based orientation recognition method for CMR images and transfer it to other modalities. We propose a prediction method to improve the accuracy of orientation recognition.
i
7f9acd56-5d71-4d52-b8f1-5682eea2c1dd
In this section, we introduce our proposed method for orientation recognition. Our proposed method is based on a deep neural network, which has been proved effective in image classification. For CMR image orientation categorization, we improve the prediction accuracy with the following steps. First, we apply invertible operators to the image to obtain 7 additional images. Then, we predict the orientations of all 8 images. Finally, we apply the inverse transformations to these predicted orientations and take a majority vote to obtain the result.
m
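A hedged sketch of this test-time scheme, assuming a trained classifier predict_orientation(image) -> label, a lookup invert_label(op_index, label) that maps a prediction made on the transformed image back to the original image's orientation, and a 2D image array; all names are illustrative, not the paper's exact implementation.

```python
# Flip-and-vote orientation prediction: transform, classify, invert, vote.
from collections import Counter

OPS = [
    lambda x: x,                # identity
    lambda x: x[::-1, :],       # flip rows
    lambda x: x[:, ::-1],       # flip columns
    lambda x: x[::-1, ::-1],    # flip both (180-degree rotation)
    lambda x: x.T,              # transpose
    lambda x: x.T[::-1, :],
    lambda x: x.T[:, ::-1],
    lambda x: x.T[::-1, ::-1],
]

def vote_orientation(image, predict_orientation, invert_label):
    # 1) create the 8 transformed versions, 2) classify each one,
    # 3) map every prediction back through the inverse operator,
    # 4) return the majority vote.
    votes = [invert_label(k, predict_orientation(op(image)))
             for k, op in enumerate(OPS)]
    return Counter(votes).most_common(1)[0][0]
```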
f6c4432c-d36f-4bfc-ad03-2cafbe4e4d7d
The DNN model achieves quite high accuracy in the recognition of CMR image orientation, and transfer learning makes it easy to transfer the model to other modalities. Thanks to data expansion and augmentation, the model only needs a small amount of data. The improved prediction scheme we propose further increases the accuracy. We are confident that a DNN model combined with transfer learning and improved prediction can be used in other orientation recognition tasks.
d
5205039c-51a9-485d-911a-5a63b2de732d
Invariance against different transformations is crucial in many tasks such as image classification and object detection. Previous works have addressed this challenge, from early work on feature descriptors [1]} to modeling geometric transformations [2]}. It is also very beneficial if the network can detect the important content in the image and distinguish it from the rest [3]}. To this end, there are different approaches such as searching through region proposals in object detection [4]}, [5]}, and using various attention mechanisms for both classification and detection tasks [6]}, [7]}, [8]}, [9]}.
i
b6cd0814-b5c9-4990-9df3-5b65d0e02911
With recent advances in deep learning, there has been a breakthrough in various areas of Computer Vision, mainly driven by the advances in Convolutional Networks [1]}, [2]}. Introducing deeper and more complex classification network architectures [3]}, [4]}, [5]} has led to achieving high accuracy on challenging datasets such as ImageNet [6]}. However, another approach for improving the performance is to simplify the classification by transforming the input image [7]}. Hence, an important question to ask is "what are the suitable transformations?".
i
c2a0ce98-5e57-45cf-9412-08e20d0805d2
In [1]}, the authors introduced the STN method for improving the classification accuracy. In STN, a network is trained to generate parameters of an affine transformation which is applied to the input image. They showed that this modification simplified the task and improved the performance. In their work, affine parameters were searched locally by differentiating the classification loss and backpropagating the gradients through a sub-differentiable sampling module.
i
1a741b1e-cd55-4c8c-b3df-4ee5fa8ce12f
Similar to STN, we address improving the classifier accuracy by applying an affine transformation to the input. Different from their approach, we model the task as a Markovian Decision Process. We break the affine transformation into a sequence of discrete and simple transformations and use RL to search for a combination of transformations which minimizes the classification error. This way, the task is simplified to a search problem in a discrete search space. Using RL, we are not dependent on the differentiability of different sampling modules and not limited to minimizing the classification loss as the optimization objective.
i
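A hedged sketch of what such a discrete affine action set could look like; the step sizes and the exact set of actions are illustrative assumptions, not the configuration used in this work.

```python
# Discrete affine actions for a sequential-transformation formulation.
import numpy as np

def affine(tx=0.0, ty=0.0, angle=0.0, scale=1.0):
    """3x3 homogeneous affine matrix for a small translation/rotation/scaling."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[scale * c, -scale * s, tx],
                     [scale * s,  scale * c, ty],
                     [0.0,        0.0,       1.0]])

# One action = one small, easily invertible transformation.
ACTIONS = [
    affine(tx=+2), affine(tx=-2), affine(ty=+2), affine(ty=-2),
    affine(angle=+np.pi / 18), affine(angle=-np.pi / 18),
    affine(scale=1.1), affine(scale=1 / 1.1),
    affine(),                                  # identity / "stop"
]

def compose(action_indices):
    """An episode's actions compose into a single affine transformation."""
    T = np.eye(3)
    for a in action_indices:
        T = ACTIONS[a] @ T
    return T
```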
a70d37e9-e4d0-4a84-88bb-08bde0a2d0cf
Since the breakthrough in RL [1]}, many works have successfully utilized it for solving different vision problems [2]}, [3]}, [4]}, [5]}. Combining RL methodology with deep learning as well as significant improvements in RL algorithms [6]}, [7]} has made it a powerful search method for different applications [2]}, [9]}. Moreover, RL can serve as a learning method which is not dependent on the differentiability of the utilized modules [10]}. For example, [11]} adapted an RL solution for Image Restoration (IR), in which the goal is maximizing the Peak Signal to Noise Ratio (PSNR). For this task, they provide a set of IR tools and use RL to search for the optimal combination of applying these tools, aiming to maximize PSNR. In another application, Bahdanau et al. adapted RL for language sequence prediction [12]}. In the RL framework, one can design a reward for different objectives; they used this characteristic to directly search for a sequence which maximizes test-time metrics such as the BLEU score.
i
aa2962af-d1c3-4905-88cc-36ebe385204f
To sum up, we formulate the transformation task as a sequential decision-making problem, in which instead of finding a one-step transformation, the model searches for a combination of discrete transformations to improve the performance. We use RL for solving the search problem and apply both Policy Gradient and Actor-Critic algorithms [1]}, [2]}. We experiment with different reward designs including maximizing classification accuracy and minimizing the classification loss. In the following, we provide related work and the required background for our approach. Afterwards, we explain our method followed by experiments and an ablation study.
i
4ef3fc84-e2c1-4a20-a666-64766c04b4f1
Our work is mainly related to STN model and the RL algorithms that we utilize for solving the sequential transformation task. In this part, we focus on explaining the main ideas of STN approach as well as the required background about RL algorithms.
w
ede05da2-ab8b-4a48-a4c5-394c8b0ba3a1
In this section, we present the experimental setup for testing the performance of our method in improving the classification accuracy by applying a sequence of discrete transformations to the input image. We proceed with a discussion on results and an ablation study on the impact of reward design and episode length.
m
7cf1e042-483b-4981-af08-87f7700684a2
In this work, we present an extension of the STN model, in which we model the problem as a sequence of discrete transformations. We formulate finding the affine transformation as a search problem and aim to learn a combination of discrete transformations which improves the classification accuracy. We use both Policy Gradient and Actor-Critic training algorithms and compare our method with extensive experiments on cluttered MNIST and Fashion-MNIST datasets. For future work, we would like to extend this work to more complex datasets such as SVHN and PASCAL VOC. Moreover, we plan to extend our approach to more general transformations beyond geometric alterations, e.g., morphological operations; this extension can be done by merely extending the action space. Another exciting direction is adapting this method for other relevant tasks such as detection.
d
e606b00f-1be2-4dc9-8112-943fec2df957
Over 30 years ago, Johnson et al. asked "How easy is neighbourhood search?" [1]}. They investigated the complexity of finding locally optimal solutions to NP-hard combinatorial optimisation problems. They showed that, even if finding an improving neighbour (or proving there isn't one) takes polynomial time, finding a local optimum can take an exponential number of steps.
w
ffa70cef-7f46-4e95-b565-8e338446c564
[1]} considers hill-climbing using flips of \(n\) zero-one variables. If the objective values are randomly generated, the number of local optima tends to grow exponentially with \(n\) . Thus the expected number of successful flips to reach a local optimum from an arbitrary point grows only linearly with \(n\) . A more general analysis of neighbourhood search in [2]} explores a variety of algorithms to reach a local optimum, but does not compare neighbourhood search against random generate-and-test. In Tovey's scenario, with only zero-one variables where the neighbourhood is defined by single flips, there are problems that require an exponential number of flips to reach a local optimum, even when choosing the best neighbour each time [3]}.
w
4838a1a9-00e4-4c48-b0b3-12a6e8de6b07
[1]} analysed the average difference between a candidate solution and its neighbours for five well-known combinatorial optimisation problems. Since this difference is positive for candidates with less than average fitness, it implies that any local optimum must have better than average fitness. The result also shows how many moves are required from an arbitrary initial value to reach a better than average fitness. However, this result is for a small set of problems and tells us little about local descent from a candidate solution of better than average fitness.
w
8dfd4151-5852-4959-88ff-337f96d3c69f
The idea of counting the number of solutions at each fitness level is directly related to the concept of the density of states which applies to continuous fitness measures encountered in solid state physics [1]}. This work additionally shows how to estimate the density of states for a problem using Boltzmann strategies.
w
b9a7239f-1565-41b1-9093-0512d711cf32
Neighbours' similar fitness (NSF) is related to the idea of fitness correlation, spelled out for example in the paper "Correlated and Uncorrelated Fitness Landscapes and How to Tell the Difference" [1]}. In this paper random tours (each pair of points on the tour being neighbours) are used to predict the fitness of a point as a linear combination of the fitness of preceding points on the tour. Another related concept is "fitness distance correlation" [2]}, which is a measure in a landscape of the correlation between the fitness of a point and its distance from the nearest global optimum. FDC is a measure designed to aid a particular class of algorithms, dependent on a global property of the landscape.
w
2d79a00f-bf3c-4f2d-84c5-a64dbeeb397e
These notions of correlation are tied to the landscape structure, in contrast to NSF which only applies to the immediate neighbours of points with a given fitness. NSF holds of many well-known neighbourhood operators, and we suggest that it is even a criterion used in designing such operators.
w
a214d764-b6f9-430a-b221-137f30ddc76b
Artificial Intelligence (AI) systems make errors. They should be corrected without damaging existing skills. The problem of non-destructive correction arises in many areas of research and development, from AI to mathematical neuroscience, where the reverse engineering of the brain's ability to learn on-the-fly remains a great challenge. It is very desirable that the corrector of errors is non-iterative (one-shot), because iterative re-training of a large system requires much time and many resources and cannot be done immediately without impeding normal activity.
i
619a20ac-0317-48a9-a855-2c6209f83ebf
The non-destructive correction requires separation of the situations (samples) with errors from the samples corresponding to correct behavior by a simple and robust classifier. Linear discriminants introduced by [1]} are simple, robust, require just the inverse covariance matrix of data, and may be easily modified for assimilation of new data. [2]} revived the common interest in linear classifiers. His works sparked intensive scientific debate [3]} and gave rise to the development of numerous crucial concepts such as, e.g., Vapnik-Chervonenkis theory [4]}, learnability [5]}, and generalization capabilities of neural networks [6]}, [7]}. Linear functionals (adaptive summators) are basic building blocks of significantly more sophisticated AI systems such as, e.g., multi-layer perceptrons [8]}, Convolutional Neural Networks [9]}, [10]} and their derivatives. Much is known about linear functionals as “stand-alone” learning machines, including their generalization margins [11]}, [6]} and numerous methods for their construction: linear discriminants and regression, perceptron learning, and Support Vector Machines [13]}, among others.
i
e48bbde4-5e39-4a61-b92e-92013ffc8ede
In this work, we demonstrate that in high dimensions and even for exponentially large samples, linear classifiers in their classical Fisher's form are powerful enough to separate errors from correct responses with high probability and to provide an efficient solution to the non-destructive corrector problem. We prove that linear functionals, as learning machines, have surprising and, to the best of our knowledge, new peculiar extremal properties: in high dimension, with probability \(p>1-\vartheta \) and for \(M<a\exp (b{n})\) with \(a,b>0\) , every point in a random i.i.d. drawn \(M\) -element set in \(\mathbb {R}^n\) is linearly separable from the rest. Moreover, the separating linear functional can be found explicitly, without iterations. This property holds for a broad set of relevant distributions, including products of probability measures with bounded support and the equidistribution in a unit ball, providing mathematical foundations for one-trial correction of legacy AI systems (cf. [1]}).
i
cd6de488-c4b8-4b35-9001-af1d55c738d3
A problem of data fusion in multiagent systems has clear similarity to the problem of non-destructive correction. According to [1]}, data collected by different agents may not be naively combined due to changes in the context, and special procedures for their assimilation without damage of gained skills are needed. The proven stochastic separation effects can be used to approach this problem. They also shed light on the possible origins of remarkable selectivity to stimuli observed in-vivo in the real brain [2]}.
i
b55aa05e-cb15-4b30-ac43-a21b04962d60
Let us start from the equidistribution in the unit ball \(\mathbb {B}_n\) in \(\mathbb {R}^n\) . The probability \(p\) that a random point belongs to a layer \(\mathbb {B}_n \setminus r\mathbb { B}_n\) (\(0<r<1\) ) between spheres of radius 1 and of radius \(r\) is \(p=1-r^n\) . Let us take a unit vector \( v\) . The probability that the projection of a random vector \(x\) on \( v\) , \(( x, v)\) , exceeds \(r\) can be estimated from above by half of the ratio of volumes of balls of radii \(\rho =\sqrt{1-r^2}\) and 1 (see Fig. REF with \(\varepsilon =1-r\) ): \( \mathbf {P}(( x, v)>r)\le 0.5 \rho ^{n}\) .
r
d316696e-d1e2-4457-b42b-bec2879a19de
Theorem 1 Let \(\lbrace x_1, \ldots , x_M\rbrace \) be a set of \(M\) i.i.d. random points from the equidistribution in the unit ball \(\mathbb {B}_n\) , \(0<r<1\) . Then \(\begin{split}&\mathbf {P}\left(\Vert x_M\Vert >r \mbox{ and } \left(x_i,\frac{x_M}{\Vert x_M\Vert }\right)<r \mbox{ for all } i\ne M \right)\\& \ge 1-r^n-0.5(M-1) \rho ^{n};\end{split}\) \(\begin{split}&\mathbf {P}\left(\Vert x_j\Vert >r \mbox{ and } \left(x_i,\frac{x_j}{\Vert x_j\Vert }\right)<r \mbox{ for all } i,j, \, i\ne j\right)\\& \ge 1-Mr^n-0.5M(M-1)\rho ^{n};\end{split}\) \(\begin{split}&\mathbf {P}\left(\Vert x_j\Vert >r \mbox{ and } \left(\frac{x_i}{\Vert x_i\Vert },\frac{x_j}{\Vert x_j\Vert }\right)<r \mbox{ for all } i,j, \,i\ne j\right)\\& \ge 1-Mr^n-M(M-1)\rho ^{n}.\end{split}\)
r
b63dbc76-697d-4198-acbf-dd43d5f817fb
The proof is based on the independence of random points \(\lbrace x_1, \ldots , x_M\rbrace \) , on the geometric picture presented in Fig. REF , and on an elementary inequality \(\mathbf {P}(A_1 \& A_2 \& \ldots \& A_m)\ge 1- \sum _i(1-\mathbf {P}(A_i))\) for any events \(A_1, \ldots , A_m\) . In Fig. REF we should take \(\varepsilon =1-r\) and the external radius of the spherical layer \(A\) is 1. [1]} provides more geometric details of concentration of the volume of high-dimensional balls. In (REF ) we estimate the probability that the cosine of the angles between \(x_i\) and \(x_j\) does not exceed \(r\) . [2]} analyzed the asymptotic behavior of these estimations for small \(r\) . The idea of almost orthogonal bases was introduced by [3]} and used efficiently by [4]} for estimation of the cardinality of \(\varepsilon \) -nets in compact convex subsets of Hilbert spaces including the sets of functions computable by perceptrons. <FIGURE>
r
b704cc5a-d799-410c-840b-8fd2b4d03960
Corollary 1 Let \(\lbrace x_1, \ldots , x_M\rbrace \) be a set of \(M\) i.i.d. random points from the equidistribution in the unit ball \(\mathbb {B}_n\) and \(0<r,\vartheta <1\) . If \(M<2({\vartheta -r^n})/{\rho ^{n}},\)
r
0990b106-6226-4b6e-9aa3-81d48116652d
Remark 1 According to (REF ) the pre-exponential factor in the estimate for \(M^2\) may be chosen as \(\vartheta \) , while the exponent depends on \(r\) only. For example, for \(r=1/\sqrt{2}\) the simple sufficient condition (REF ) gives \(M^2<\frac{2}{3}\vartheta 2^{n/2}\) . For \(\vartheta =0.01\) (or specificity 99%) and \(n=100\) we get \(M<2,740,000\) .
r
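For completeness, the arithmetic behind the quoted figure: with \(r=1/\sqrt{2}\) , \(\vartheta =0.01\) and \(n=100\) , the condition reads \(M^2<\frac{2}{3}\cdot 0.01\cdot 2^{50}\approx 7.5\cdot 10^{12}\) , i.e. \(M<2.74\cdot 10^{6}\) , which gives the stated bound of 2,740,000.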
d8d555aa-9d50-4106-a110-009f93ae4ed5
Thus, if we select 2,700,000 i.i.d. points from an equidistribution in a unit ball in \(\mathbb {R}^{100} \) then with probability \(p>0.99\) all these points will be vertices of their convex hull.
r
24dd9598-7246-479a-8139-8edffb22881e
Estimates similar to (REF ), (REF ), and (REF ) are useful for the equidistribution of the normalized data on a unit sphere too. This is because they not only establish the fact of separability but also specify separation margins.
r
fcd42cd4-2959-4d14-9cea-70f674941e59
Consider a product distribution in an \(n\) -dimensional unit cube. Let the coordinates of a random point, \(X_1, \ldots , X_n\) (\(0 \le X_i \le 1\) ) be independent random variables with expectations \(\overline{X}_i\) and variances \(\sigma _i^2>\sigma _0^2>0\) . Let \(\overline{x}\) be a vector with coordinates \(\overline{X}_i\) . For large \(n\) , this distribution is concentrated in a relatively small vicinity of a sphere with an arbitrary centre \(c\) with coordinates \(c_i\) and radius \(R\) , where \(R^2=\mathbf {E}\left(\sum _i (X_i-c_i)^2\right)=\sum _i \sigma _i^2 + \Vert \overline{x}-c\Vert ^2.\)
r
d0e1cc71-e819-4987-b51c-687c6581ca35
Concentration near the spheres with different centres implies concentration in the vicinity of their intersection (an example of the `waist concentration' [1]}). The vicinity of the spheres, where the distribution is concentrated, can be estimated by the Hoeffding inequality [2]}. Let \(Y_1, \ldots , Y_n\) be independent bounded random variables: \(0 \le Y_i \le 1\) . The empirical mean of these variables is defined as \(\overline{Y} = \frac{1}{n}(Y_1 + \cdots + Y_n)\) . Then \(\begin{split}&\mathbf {P} \left(\overline{Y} - \mathbf {E}\left[\overline{Y} \right] \ge t \right) \le \exp \left(-2n t^2 \right); \\&\mathbf {P} \left(\left|\overline{Y} - \mathbf {E}\left[\overline{Y} \right] \right| \ge t \right) \le 2\exp \left(-2nt^2 \right).\end{split}\)
r
053bb538-8640-4f07-945a-3b43bc629d69
Let us take \(Y_i=(X_i-c_i)^2\) . Consider the centres located in the cube, \(0\le c_i \le 1\) . Then \(0\le Y_i \le 1\) and \(\mathbf {E}\left[\overline{Y} \right]=\frac{1}{n}R^2\) . In particular, if \(c_i=\overline{X}_i\) then \(\mathbf {E}\left[\overline{Y} \right]=\frac{1}{n}R_0^2\) (the minimal possible value), where \(R_0^2=\sum _i \sigma _i^2\ge n\sigma _0^2\) . In general, \(n\sigma _0^2\le R^2 \le n\) .
r
5720dcd9-d7d1-4bda-abd3-10a804fc3145
With probability \(p>1- 2\exp \left(-2nt^2 \right)\) a random point \(x\) belongs to the spherical layer (\(\delta ={nt}/{R_0^2}\) , \(t=\delta R_0^2/n\) ): \(1-\delta \le {\Vert x-\overline{x}\Vert ^2}/{R_0^2}\le 1+\delta .\)
r
1a32c8c5-1590-40a8-9711-b5f19e67b769
Consider \(M\) i.i.d. points \(\lbrace x_1, \ldots , x_M\rbrace \) from the product distribution. With probability \(p>1- 2M\exp \left(-2nt^2 \right)\) they all belong to the spherical layer (REF ). Therefore, with this probability we return to the situation presented in Fig. REF with internal radius \(\sqrt{1-\delta }R_0\) and external radius \(\sqrt{1+\delta }R_0\) . The difference from the equidistribution in the ball is that the volume of the ball is concentrated near the external sphere, while the distribution in the layer (REF ) is concentrated around the sphere \(\Vert x-\overline{x}\Vert ^2=R_0^2\) .
r
cb8af0e7-20ec-43db-9e9a-50459bcc8599
The radius of ball \(B\) is defined by \(\rho ^2=(1+\delta )R_0^2-(1-\delta )R_0^2=2\delta R_0^2\) . The concentration radius (REF ) for the spheres concentric with the ball \(B\) (Fig. REF ) is defined by \(R^2=R_0^2+(1-\delta )R_0^2=(2-\delta )R_0^2\) . Therefore, a random point does not belong to the ball \(B\) with probability \(p>1- \exp \left(-2n\tau ^2 \right)\) , where \(\tau =\frac{1}{n}(R^2-\rho ^2)=\frac{1}{n}(2-3 \delta )R^2_0\) . Thus, we get the following statement.
r
5fa71ae2-a5ad-470c-b776-34a923f1c224
Theorem 2 Let \(\lbrace x_1, \ldots , x_M\rbrace \) be i.i.d. random points from the product distribution in a unit cube, \(0< \delta <2/3\) . Then \(\begin{split}&\mathbf {P}\left(1-\delta \le \frac{\Vert x_j-\overline{x}\Vert ^2}{R^2_0}\le 1+\delta \mbox{ and } \right. \\& \left.\left(\frac{x_i-\overline{x}}{R_0},\frac{x_M-\overline{x}}{\Vert x_M-\overline{x}\Vert }\right)<\sqrt{1-\delta } \mbox{ for all } i,j, \, i\ne M \right)\\&\ge 1- 2M\exp \left(-2\delta ^2 R_0^4/n \right) -(M-1)\exp \left(-2R_0^4(2-3 \delta )^2/n\right);\end{split}\) \(\begin{split}&\mathbf {P}\left(1-\delta \le \frac{\Vert x_j-\overline{x}\Vert ^2}{R^2_0}\le 1+\delta \mbox{ and } \right. \\&\left.\left(\frac{x_i-\overline{x}}{R_0},\frac{x_j-\overline{x}}{\Vert x_j-\overline{x}\Vert }\right)<\sqrt{1-\delta } \mbox{ for all } i,j, \, i\ne j \right)\\&\ge 1- 2M\exp \left(-2\delta ^2 R_0^4/n \right) -M(M-1)\exp \left(-2R_0^4(2-3 \delta )^2/n\right)\end{split}\)
r
0b04f87e-d8f0-42e9-8ff5-f8bd30ab39a8
When the value of delta is chosen as \(\delta =0.5\) and \(R_0\) is replaced with its estimate from below, \(R^2_0\ge n \sigma _0^2\) , inequalities (REF ) and (REF ) result in the following estimates: \(\begin{split}&\mathbf {P}\left(\frac{1}{2} \le \frac{\Vert x_j-\overline{x}\Vert ^2}{R^2_0}\le \frac{3}{2} \mbox{ and } \right. \\& \left.\left(\frac{x_i-\overline{x}}{R_0},\frac{x_M-\overline{x}}{\Vert x_M-\overline{x}\Vert }\right)<\sqrt{1-\delta } \mbox{ for all } i,j, \, i\ne M \right)\\&\ge 1-3M\exp \left(-0.5 n\sigma _0^4 \right);\end{split}\) \(\begin{split}&\mathbf {P}\left(\frac{1}{2} \le \frac{\Vert x_j-\overline{x}\Vert ^2}{R^2_0}\le \frac{3}{2} \mbox{ and } \right. \\&\left.\left(\frac{x_i-\overline{x}}{R_0},\frac{x_j-\overline{x}}{\Vert x_j-\overline{x}\Vert }\right)<\sqrt{1-\delta } \mbox{ for all } i,j, \, i\ne j \right)\\&\ge 1-M(M+1)\exp \left(-0.5 n\sigma _0^4 \right).\end{split}\)
r
1abccbce-ce6c-4a38-9e4b-99464eb5ad0f
Corollary 2 Let \(\lbrace x_1, \ldots , x_M\rbrace \) be i.i.d. random points from the product distribution in a unit cube and \(0<\vartheta <1\) . If \(M<\frac{1}{3}\vartheta \exp \left(0.5 n\sigma _0^4 \right),\)
r
b454d8d2-8763-4cb8-9763-fb4e5bf6db86
then with probability \(p>1-\vartheta \) \(\begin{split}& 0.5\le \frac{\Vert x_j-\overline{x}\Vert ^2}{R_0^2} \le 1.5 \mbox{ and }\left(\frac{x_i-\overline{x}}{R_0},\frac{x_M-\overline{x}}{\Vert x_M-\overline{x}\Vert }\right)<\frac{\sqrt{2}}{2} \\&\mbox{ for all } i,j, i \ne M.\end{split}\)
r
de268d45-977f-45d2-879c-d5267ad64948
then with probability \(p>1-\vartheta \) \(\begin{split}&0.5\le \frac{\Vert x_j-\overline{x}\Vert ^2}{R_0^2} \le 1.5 \mbox{ and }\left(\frac{x_i-\overline{x}}{R_0},\frac{x_j-\overline{x}}{\Vert x_j-\overline{x}\Vert }\right)<\frac{\sqrt{2}}{2} \\& \mbox{ for all }i,j, i\ne j.\end{split}\)
r
0fb424c2-5d8f-456d-b947-e13fc663a768
The estimates (REF ), (REF ) are far from being optimal and can be improved. The main message here is their exponential dependence on \(n\) : the upper boundary of \(M\) can grow with \(n\) exponentially. Numerical experiments show that the equidistribution in a cube is not worse, from the practical point of view, than the uniform distribution in a ball. To illustrate this, we empirically assessed linear separability of samples drawn from equidistributions in the unit \(n\) -cubes. For selected values of \(n\) from the set \(\lbrace 1,\dots ,5000\rbrace \) we generated 100 samples \(S\) of \(M=20 000\) random points from \([0,1]^n\) . For each sample, a sub-sample \(\underline{S}\subset S\) of \(N=4000\) points was randomly chosen, and for each point \(x_i\) in this sub-sample linear functionals \(l(x)=\left(x_i-\bar{x},x-\bar{x}\right)-\Vert x_i-\bar{x}\Vert ^{2}\) were constructed. Signs of \(l(x_j)\) , \(x_j\in S\) , \(x_j\ne x_i\) were calculated, and the numbers \(N_{-}\) of instances when \(l(x_j)< 0\) were recorded. Empirical frequencies \(N_{-}/N\) were then derived. Outcomes of this experiment are summarized in Fig. REF . These experiments demonstrate that the probability that a randomly selected point in a sample is linearly separable from the rest could be significantly higher than the simple exponential estimates provided. This, however, is not surprising as the estimates are based on the values of means and variances, and do not take into account other quantitative properties of the sample distribution. <FIGURE>
r
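A hedged numpy re-implementation of this experiment at a reduced scale (sample sizes are shrunk for speed; the functional is the one given above, and for each chosen point we check whether it is linearly separable from all other sample points):

```python
# Empirical separability check for equidistributions in the unit n-cube.
# l(x) = <x_i - xbar, x - xbar> - ||x_i - xbar||^2 should be negative on every
# other sample point if x_i is linearly separable from the rest.
import numpy as np

def separability_fraction(n, M=2000, N=200, seed=0):
    rng = np.random.default_rng(seed)
    S = rng.random((M, n))                 # M i.i.d. points from [0,1]^n
    C = S - S.mean(axis=0)                 # centred sample
    separable = 0
    for i in rng.choice(M, size=N, replace=False):
        v = C[i]
        l = C @ v - v @ v                  # l(x_j) for every j; l(x_i) = 0 by construction
        l[i] = -1.0                        # exclude the point itself from the check
        separable += bool(np.all(l < 0))
    return separable / N

for n in (2, 10, 100, 1000):
    print(n, separability_fraction(n))
```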
e461b4f4-16b8-4107-b24f-9d7c30f2cad0
In general position, a set of \(n\) points in \(\mathbb {R}^{n-1}\) is linearly separable. Therefore, if \(n-1\) or less points from \(\mathcal {M}=\lbrace x_1, \ldots , x_{M-1}\rbrace \) are not separated from \(x\) by the hyperplane \(L\) (Fig. REF ) then they can be separated by an additional hyperplane orthogonal to \(L\) . This means that \(x\) can be separated from the whole \(\mathcal {M}\) by a conjunction of two linear inequalities, \((\bullet , x/\Vert x\Vert ) >r \, \& \, (\bullet , y) > q\) , for some \(0<r<1\) , \(q>0\) , and \(y\) , \((y,x)=0\) . This system can be considered as a cascade of two independent neurons [1]}. The probability of such a two-neuron separability is higher than of linear separability. (Compare inequality (REF ) in the following theorem to (REF ).)
r
d55cf73a-217a-4b06-9fdc-54cc76ba4992
Theorem 3 Let \(S=\lbrace x_1, \ldots , x_M\rbrace \) be a set of \(M\) i.i.d. random points from the equidistribution in the unit ball \(\mathbb {B}_n\) , \(0<r<1\) . Then \(\begin{split}&\mathbf {P} \left(\Vert x_M\Vert >r \, \&\, \left(x_i,\frac{x_M}{\Vert x_M\Vert }\right)<r\mbox{ for at least } M\!-\!n \mbox{ points } x_i\in S\right) \\& \ge (1-r^n)(1-0.5\rho ^{n})^{M-1} \\&\times \left(1-\frac{1}{n!}\left(\frac{0.5(M-n)\rho ^{n}}{1-0.5\rho ^{n}}\right)^n\right) \exp {\left[\frac{0.5(M-n)\rho ^{n}}{1-0.5\rho ^{n}}\right]},\end{split}\)
r
31334b4e-a50e-4386-a8b4-57a5d2b11d6c
For \(r=1/\sqrt{2}\) , \(n=100\) , and \(M=2.74\cdot 10^6\) , (REF ) gives: \(\mathbf {P} \left(\Vert x_M\Vert >r \& \left(x_i,\frac{x_M}{\Vert x_M\Vert }\right)<r\mbox{ for at least } M\!-\!n \ \ x_i\ \in S\right) \ge 1-\theta \) with \(\theta <5\cdot 10^{-14}\) . The probability stays close to 1 for much larger values of \(M\) , as setting \(M=7\cdot 10^{16}\) results in the estimate: \(\mathbf {P} \left(\Vert x_M\Vert >r \& \left(x_i,\frac{x_M}{\Vert x_M\Vert }\right)<r\mbox{ for at least } M\!-\!n \ \ x_i\in S\right) \ge 1-\theta \) with \(\theta <5\cdot 10^{-9}\) .
r
2b11e159-c3ee-4f84-8df2-67a5900bba3e
Classical measure concentration theorems state that random points are concentrated in a thin layer near a surface (a sphere, an average or median level set of energy or another function, etc.). The stochastic separation theorems describe the fine structure of these thin layers: the random points are not only concentrated in a thin layer but are all linearly separable from the rest of the set even for exponentially large random sets. The estimates are produced for two classes of distributions in high dimension: for equidistributions in balls or ellipsoids or for the product distributions with compact support (i.e. for the case when coordinates are bounded independent random variables). Numerous generalisations are possible, for example:
d
1b966857-df05-454d-a256-cd50eba2eb26
Relax the requirement of independent coordinates in Theorem REF to that of weakly dependent vector-valued variables; Instead of equidistributions, consider distributions with strongly log-concave probability densities; Use various simple and robust nonlinear classifiers like small neural cascades (compare to Theorem REF ), algebraic classifiers and other families. For these generalisations, the VC dimension is expected to play the role similar to dimension \(n\) in Theorems REF and REF .
d
45806a0e-ac02-4d53-bc91-5cc1b5c41fde
Stochastic separation Theorems 1–3 are important for synthesis and one-shot correction of AI systems. For example, inequalities (REF ) and (REF ) evaluate the probability that a randomly selected point \(x_M\) is linearly separable from all other \(M-1\) points by the linear functional \(l(x)=(x,x_M-\overline{x})\) . This separation is sufficient to correct a mistake of a legacy AI system without any re-learning and modification of existing skills [1]}. Such measure concentration effects reveal the hidden geometric background of the reported success of randomized neural networks models [2]}.
d
c240e12f-8c95-4e44-a168-16a35facfc6b
Stochastic separation theorems can simplify high-dimensional data analysis and generate the 'blessing of dimensionality' [1]}. For example, according to (REF ), in a dataset with 100 attributes and \(M<2.7\cdot 10^6\) samples we should not be surprised to observe the linear separability of each sample from the rest of the database by the inequalities \(\langle x_i,x_j\rangle <\sqrt{\frac{1}{2}\langle x_i,x_i\rangle }\) (\(i\ne j\) ) in the Mahalanobis inner product \(\langle x, y\rangle =(x, S^{-1}y)\) , where \(S\) is the empirical covariance matrix. The Mahalanobis inner product is used for `whitening', i.e. for transformation of the data cloud into the spherical form. Of course, these attributes should not be highly correlated and the empirical covariance matrix should be invertible.
d
32425a46-5b43-4fb0-b1bb-0f5f3fa910a8
We analysed separation of random points from random sets. This is the problem of single correction of a legacy AI system. The question of generalisability of this correction is of great practical importance. It leads to a problem of separation of two random sets. A simple series of generalisations can be immediately produced from Theorems 1-3 for separation of an \(M\) -element random set \(S=\lbrace x_1, \ldots , x_M\rbrace \subset \mathbb {R}^{n}\) from a \(k\) -element one \(\lbrace y_1, \ldots , y_k\rbrace \) for \(k<n\) . For this purpose, we can consider a linear space \(E=\mathrm {span}\lbrace y_i-y_1 \, | \, i=2,\ldots , k\rbrace \) and study separation of a point from an \(M\) -element set in the projection onto the quotient space \(\mathbb {R}^{n}/E\) . If \({y_1,\ldots ,y_k}\) are independent then separation would likely be limited to sets of small cardinality \(k<n\) . If, in contrast, \({y_1,\ldots ,y_k}\) are pair-wise positively correlated then we can expect that a single functional would separate them from \(S\) , with reasonable probability even for some \(k\ge n\) . This naturally gives rise to generalization of “corrections”.
d
8c778edc-e351-48f0-9616-44850c442962
The reported extreme separation capabilities of linear functionals offer new insights into the Grandmother cell or concept cell phenomena that are broadly reported in neuroscience [1]}, [2]}. The essence of the phenomenon is that some neurons in the human brain respond unexpectedly selectively to particular persons or objects. Strikingly, not only is the brain able to respond selectively to “rare” individual stimuli, but such selectivity can also be learnt very rapidly from a limited number of experiences [3]}. The question is: why may small ensembles of neurons deliver such a sophisticated functionality reliably? Stochastic separation Theorems 1-3 provide a possible answer. If we accept a) linear functionals followed by a nonlinear threshold-modulated response as phenomenological models of the cells whose activity was measured, b) that the number of inputs converging to these cells is large enough, and c) that these inputs are statistically independent, then the extreme selectivity of responses of such models follows immediately from Theorem 2.
d
f3075dcd-05d1-485d-a3fc-ae66b8526b7b
Recommender systems aim to provide users with personalized products or services. They can help handle the increasing online information overload problem and improve customer relationship management. Collaborative Filtering (CF) is a canonical recommendation technique, which predicts interests of a user by aggregating information from similar users or items. In detail, existing CF-based methods [1]}, [2]}, [3]}, [4]} learn latent representations of users and items, by first factorizing the observed interaction matrix, then predicting the potential interests of user-item pairs based on the dot-product of learned embeddings. However, existing CF models rely heavily on negative sampling techniques to discriminate against different items, because negative samples are not naturally available.
i
0af2a972-9d91-4ba2-9a92-76510c9ea576
Nevertheless, negative sampling techniques suffer from the following limitations. Firstly, they introduce additional computation and memory costs. In existing CF-based methods, the negative sampling algorithm must be carefully designed so as not to select the observed positive user-item pairs. Specifically, to sample one negative user-item pair for a specific user, the algorithm has to check for conflicts with all the observed positive items interacted with by this user. As a result, much computation is needed for users who have a large number of interactions. Secondly, even if non-conflicting negative samples are selected for a user, the samples may turn out to be future positive items of that user. The reason is that the unobserved user-item pairs can be either true negative instances (i.e., the user is not interested in interacting with these items) or missing values (e.g., positive pairs in the test set) [1]}, [2]}. Although another line of work [3]}, [4]}, [5]} gets rid of negative sampling and takes all unobserved interactions as negative samples, it may still treat a future positive sample as negative.
i
0b183166-5d58-4254-a184-06bd11c2a5e8
Self-supervised learning (SSL) models [1]}, [2]}, [3]}, proposed recently, provide a possible solution to the aforementioned limitations. SSL trains a model by iteratively updating network parameters without using negative samples, and thus presents a way to scale recommender systems to big-data scenarios. Research in domains ranging from Computer Vision (CV) to Natural Language Processing (NLP) has shown that SSL can achieve competitive or even better results than supervised learning [1]}, [5]}, [3]}. The underlying idea is to maximize the similarity of representations obtained from different distorted versions of a sample using a variant of Siamese networks [7]}. Siamese networks usually consist of two symmetric networks (i.e., an online network and a target network) whose outputs are compared. The problem with using only positive samples in training is that the Siamese networks collapse to a trivial constant solution [2]}. Thus, in recent work, BYOL [1]} and SIMSIAM [2]} introduce asymmetry into the network architecture together with a dedicated parameter-update technique. Specifically, an additional “predictor” network is stacked onto the online encoder, and a special “stop gradient” operation is highlighted as the key to preventing the solution from collapsing. The difference between BYOL and SIMSIAM is that SIMSIAM does not need an additional momentum encoder; its target network shares parameters with the online network. We illustrate the architectures in detail (see Fig. REF ) in the related work section.
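A condensed PyTorch-style sketch of the SIMSIAM objective described here is given below; the encoder and predictor architectures and the input dimensions are placeholders, not the networks used in the original papers.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 64))   # placeholder backbone
predictor = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 64))    # extra "predictor" head

def simsiam_loss(x1: torch.Tensor, x2: torch.Tensor) -> torch.Tensor:
    """Symmetric negative cosine similarity with stop-gradient on the target branch."""
    z1, z2 = encoder(x1), encoder(x2)          # representations of two distorted views
    p1, p2 = predictor(z1), predictor(z2)
    # .detach() implements the "stop gradient" that prevents the trivial constant solution.
    return -(F.cosine_similarity(p1, z2.detach(), dim=-1).mean()
             + F.cosine_similarity(p2, z1.detach(), dim=-1).mean()) / 2

x = torch.randn(8, 128)
print(simsiam_loss(x + 0.1 * torch.randn_like(x), x + 0.1 * torch.randn_like(x)).item())
```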
i
ac698618-7ee9-444a-bdc7-0a1bb97e4841
To the best of our knowledge, BUIR [1]} is the only framework for CF that learns user and item latent representations without negative samples. BUIR is derived from BYOL [2]}. Similar to BYOL, BUIR employs two distinct encoder networks (i.e., online and target networks) to avoid the trivial constant solutions that recur in SSL. In BUIR, the parameters of the online network are optimized towards those of the target network, while the parameters of the target network are updated with a momentum-based moving average [3]}, [2]}, [5]} so that they slowly approximate the online network [1]}. Because BUIR is built upon BYOL, which stems from the vision domain, its architecture is redundant, and the momentum-based parameter updating makes it difficult to fuse information from high-order neighbors for efficient recommendation. The SIMSIAM network was also originally proposed in the vision domain, where the input is an image and data augmentation techniques are relatively mature [7]}, such as random cropping, resizing, horizontal flipping, color jittering, converting to grayscale, Gaussian blurring, and solarization. For a pair of user and item ids observed in implicit feedback, however, there is no standard way to distort the input while keeping its representation invariant.
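The momentum-based moving-average update that BYOL and BUIR use to maintain a separate target network can be sketched as follows; the encoder and the momentum value are illustrative.

```python
import copy
import torch
import torch.nn as nn

online_encoder = nn.Linear(64, 32)                  # placeholder online network
target_encoder = copy.deepcopy(online_encoder)      # distinct target network, not trained by SGD
for p in target_encoder.parameters():
    p.requires_grad_(False)

@torch.no_grad()
def momentum_update(momentum: float = 0.99) -> None:
    """Move the target parameters slowly towards the online parameters."""
    for p_t, p_o in zip(target_encoder.parameters(), online_encoder.parameters()):
        p_t.mul_(momentum).add_(p_o, alpha=1.0 - momentum)

momentum_update()   # called once per training step, after the online network is updated
```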
i
6867fcd6-d7f4-4b51-951e-be30f0b56df4
In this paper, we propose a self-supervised collaborative filtering framework that performs posterior perturbation on the user and item output embeddings to obtain a contrastive pair. In terms of architecture, our framework uses only one encoder, shared by the online network and the target network; this design makes the framework fundamentally different from SIMSIAM and BYOL. In addition, we do not rely on a momentum encoder to update the target network. Instead, we use three posterior embedding perturbation techniques (analogous to input data augmentation of an image) to generate different but invariant views from the two networks. An additional benefit of posterior embedding perturbation is that the framework can treat the internal implementation of the encapsulated backbones as a black box, whereas BUIR adds momentum-based parameter updating to the backbones in order to generate different views. Our experiments on three real-world datasets validate that the proposed SSL framework is able to learn informative representations solely from positive user-item pairs. In our experiments, we encapsulate two popular CF-based models into the framework, and the results on Top-\(K\) item recommendation are competitive with or even better than those of their supervised counterparts.
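The overall shape of such a framework can be read from the description above roughly as follows; this is an interpretive sketch with illustrative names, a toy embedding backbone, and a simple dropout-style perturbation, not the authors' reference implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Embedding(1_000, 64)     # stand-in for the shared CF backbone's output embeddings
predictor = nn.Linear(64, 64)         # linear predictor on the online branch

def selfcf_style_loss(ids: torch.Tensor, drop_rate: float = 0.1) -> torch.Tensor:
    """One shared encoder; the target view is a perturbed, stop-gradient copy of its output."""
    z_online = encoder(ids)
    z_target = F.dropout(z_online.detach(), p=drop_rate, training=True)  # posterior perturbation
    p_online = predictor(z_online)
    return -F.cosine_similarity(p_online, z_target, dim=-1).mean()

print(selfcf_style_loss(torch.arange(16)).item())
```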
i
0595bc8f-e9e7-4c59-9635-6e9bd9fe31f6
We propose a novel framework, SELFCF, that learns latent representations of users and items solely from positively observed interactions. The framework uses posterior output perturbation to generate different augmented views of the same user/item embeddings for contrastive learning. We design three output perturbation techniques, namely historical embedding, embedding dropout, and edge pruning, to distort the output of the backbone. The techniques are applicable to all existing CF-based models as long as their outputs are embedding-like. We investigate the underlying mechanisms of the framework through an ablation study of each component. We find that the representations of users and items can be learnt even without the “stop gradient” operator, which differs from the behavior of previous SSL frameworks (e.g., BYOL [1]} and SIMSIAM [2]}). Finally, we conduct experiments on three public datasets by encapsulating two popular backbones. The results show that SELFCF is competitive with or better than its supervised counterparts and outperforms the existing SSL framework by up to 8.93% on average.
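One way the three output perturbation techniques could be realised is sketched below; the exact variants, hyperparameters, and interfaces used in SELFCF may differ, so these functions should be read as illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def embedding_dropout(emb: torch.Tensor, p: float = 0.1) -> torch.Tensor:
    """Randomly zero a fraction of the output embedding entries."""
    return F.dropout(emb, p=p, training=True)

def historical_embedding(emb: torch.Tensor, memory: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """Blend the current output with an embedding stored from an earlier training step."""
    return tau * memory + (1.0 - tau) * emb

def edge_pruning(edge_index: torch.Tensor, keep: float = 0.9) -> torch.Tensor:
    """Randomly drop user-item interaction edges (shape 2 x E) before the backbone encodes the graph."""
    mask = torch.rand(edge_index.size(1)) < keep
    return edge_index[:, mask]
```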
i
9c813045-c852-499a-8686-51bd89e2a521
We evaluate the framework on three publicly available datasets and compare its performance with BUIR [1]} by encapsulating two popular CF-based baselines. The CF baselines serve as supervised counterparts to our self-supervised framework. All baselines, as well as the frameworks, are trained on a single GeForce RTX 2080 Ti (11 GB).
m
8da1b0e4-0212-4eb7-a127-be944c29f172
RQ1: Can self-supervised baselines that leverage only positive user-item interactions outperform their supervised counterparts? RQ2: How does SELFCF shape the recommendation results for cold-start and loyal users? RQ3: Why does SELFCF work, and which component is essential for preventing collapse?
m
f94ba4f7-b029-40b8-80e4-fe690bac67ae
We address the first research question by evaluating our framework against supervised baselines on 3 datasets with 4 evaluation metrics. Next, we examine the recommendation results of the baselines under both supervised and self-supervised settings and analyze their performance for users with different numbers of interactions. Finally, to investigate the underlying mechanisms of SELFCF, we perform an ablation study on the components of SELFCF, such as the linear predictor and the loss function. <TABLE>
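Since this passage does not spell out which four evaluation metrics are used, the sketch below shows two Top-K metrics that are standard for this task, Recall@K and NDCG@K, as an assumed example of how the ranked recommendation lists would be scored.

```python
import numpy as np

def recall_at_k(ranked_items, ground_truth, k=20):
    """Fraction of the user's held-out items that appear in the top-k recommendations."""
    hits = len(set(ranked_items[:k]) & set(ground_truth))
    return hits / max(len(ground_truth), 1)

def ndcg_at_k(ranked_items, ground_truth, k=20):
    """Discounted cumulative gain of the top-k list, normalised by the ideal ordering."""
    gt = set(ground_truth)
    dcg = sum(1.0 / np.log2(i + 2) for i, item in enumerate(ranked_items[:k]) if item in gt)
    idcg = sum(1.0 / np.log2(i + 2) for i in range(min(len(gt), k)))
    return dcg / idcg if idcg > 0 else 0.0

print(recall_at_k([3, 7, 1, 9], [7, 2], k=3), ndcg_at_k([3, 7, 1, 9], [7, 2], k=3))
```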
m
1a5c5605-7269-4bc9-9e94-f02d9a341a8d
In this paper, we propose a framework on top of Siamese networks that learns representations of users and items without negative samples or labels. We argue that the self-supervised learning techniques widely used in vision cannot be directly applied in the recommendation domain. Hence, we design a Siamese network architecture that perturbs the output of the backbone instead of augmenting the input data. By encapsulating two popular recommendation models into the framework, our experiments on three datasets show that the proposed framework is on par with or better than the existing self-supervised framework, BUIR. Its performance is also competitive with that of the supervised counterparts, especially on highly sparse data. We hope our study will shed light on further research on self-supervised collaborative filtering.
d
8b9c83d5-6555-4db4-8e64-e1bbfcc98012
This work is motivated by the biological challenge of spaceflight and the effects of cosmic radiation on human health. Cosmic radiation is able to penetrate thick layers of shielding and body tissue, and its carcinogenic nature is a major cause for concern for long-distance space travel [1]}. Understanding the exact mechanisms of radiation damage could give us insight into how to mitigate its effects on cells and tissues [2]}. However, studying these mechanisms is difficult because human space data is incredibly expensive and difficult to generate [3]}. As a result, studies on human data typically do not have the statistical power required to identify risk factors, particularly in high-dimensional genomics and transcriptomics datasets [4]}.
i
90ff2fb4-3077-4d89-878d-f2a08ab6ef0e
In-vitro experiments on human cells are an alternative data source, but these lack the etiological validity of a complete organism [1]}, especially given how radiation interacts with the different tissues it penetrates [2]}. A second alternative is animal models, which can be exposed to radiation that mimics a cosmic profile. While such data is much easier to generate, experiments conducted in animal models have a poor record of translating into useful human findings due to the obvious dissimilarities between organisms [3]}. This abundance of animal model data coupled with a dearth of human data is by no means unique to cosmic radiation experiments: it is an issue encountered routinely for conditions that are hard to study experimentally in humans without putting them at risk [4]}, [5]}.
i
4a06aacf-a0a2-4c54-8571-da218c40bc94
A causal framework has been proposed in the literature based on identifying invariance across different data-generating environments, leading to advances in domain generalization [1]}, [2]}, [3]} (the choice of IRM in this work is exploratory; the results suggest that other causal methods would also work). Environments are defined as subsets of the available data that do not share the same underlying populations but for which the causal relationships to a target variable are assumed invariant; e.g., different hospitals might capture data about a disease but with varying populations of underlying health characteristics.
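In practice, this invariance requirement is commonly enforced with the IRMv1 penalty; the PyTorch sketch below is a generic version of that objective (the model, the per-environment data loading, and binary float labels are assumptions), not the exact training code used in this work.

```python
import torch
import torch.nn.functional as F

def irm_penalty(logits: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """IRMv1 penalty: squared gradient of the risk w.r.t. a dummy classifier scale."""
    scale = torch.ones(1, requires_grad=True)
    loss = F.binary_cross_entropy_with_logits(logits * scale, y)
    grad = torch.autograd.grad(loss, [scale], create_graph=True)[0]
    return (grad ** 2).sum()

def irm_objective(model, envs, lam: float = 1.0) -> torch.Tensor:
    """Empirical risk plus invariance penalty, summed over data-generating environments."""
    total = torch.zeros(())
    for x, y in envs:                # each env: (features, float 0/1 labels) from one population
        logits = model(x).squeeze(-1)
        total = total + F.binary_cross_entropy_with_logits(logits, y) + lam * irm_penalty(logits, y)
    return total
```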
i
bce2aa4b-68af-436d-b65d-54c1a047761e
Our contributions include demonstrating the effectiveness of this framework for identifying invariant relationships present in cross-organism datasets. Further, we provide two open-source cross-organism datasets (with matched gene homologues) to the community for further research: one based on acute gamma radiation experiments on humans and mice, and the second on chronic heavy-ion radiation experiments. Our results provide valuable insights into a set of human-relevant health variables in a setting where there is a deficit of human data but an abundance of model organism data.
i
3e5c937e-45e3-4efa-9f8e-61c4d9574a68
We set up our experiment as a binary classification task and trained an IRM model to distinguish irradiated samples from non-irradiated controls. We consider two types of experiments (augmentation and substitution) for each combined biological dataset presented in Section . In Augmentation Experiments we investigate varying the number of mouse samples available in the cross-organism dataset: mouse samples are added to the original human samples in small increments until all mouse samples are combined with the human samples. In Substitution Experiments we substitute human samples with mouse samples on an incremental basis: we begin with all human data and two mouse samples per environment, and incrementally replace human samples with mouse samples until only two human samples per environment remain.
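A schematic of how the two sequences of mixed-organism training sets could be assembled is given below; the array layout, the handling of environments, and the increment size are illustrative assumptions rather than the exact experimental protocol.

```python
import numpy as np

def augmentation_mixes(human_X, mouse_X, step=10):
    """All human samples, with mouse samples added in increments of `step`."""
    for n_mouse in range(0, len(mouse_X) + 1, step):
        yield np.vstack([human_X, mouse_X[:n_mouse]])

def substitution_mixes(human_X, mouse_X, step=10, keep_human=2):
    """Incrementally replace human samples with mouse samples, always keeping `keep_human` human samples."""
    max_replace = len(human_X) - keep_human
    for n_replace in range(0, max_replace + 1, step):
        yield np.vstack([human_X[: len(human_X) - n_replace], mouse_X[:n_replace]])
```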
m
b8e83f4e-a34d-4c87-8d39-449c716c2763
For each experiment, the output is a ranked list of features (genes), where the order is determined by the coefficients derived from the model \(\Phi \) in Equation ; the more consistently a feature predicts the target variable across multiple environments, the higher the corresponding coefficient. These invariant features are then considered candidate causal features. In each experiment, we evaluate how the reported highest-ranked features change as the mouse-human ratio changes. Thus, we compare the gene rankings that IRM returns for each combination of mouse and human data in a pairwise fashion. Specifically, we consider how many of the top-10 or top-50 ranked genes overlap between the mixed-organism experiments. We also consider two similarity metrics common in the literature: the Rank-Biased Overlap [1]} of the ranked features and their Kendall tau similarity [2]}. <FIGURE>
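The pairwise ranking comparisons can be sketched as follows; the random rankings merely stand in for gene lists ordered by IRM coefficients, Kendall's tau comes from standard tooling, and Rank-Biased Overlap is omitted here because it needs a dedicated implementation.

```python
import numpy as np
from scipy.stats import kendalltau

def topk_overlap(rank_a, rank_b, k=10):
    """Fraction of shared genes among the top-k of two rankings."""
    return len(set(rank_a[:k]) & set(rank_b[:k])) / k

rng = np.random.default_rng(0)
# Gene indices ordered by the absolute coefficients of two different mouse-human mixes.
rank_a = np.argsort(-np.abs(rng.normal(size=500)))
rank_b = np.argsort(-np.abs(rng.normal(size=500)))

print(topk_overlap(rank_a, rank_b, k=10))

# Kendall's tau compares the position each gene receives in the two rankings.
pos_a, pos_b = np.argsort(rank_a), np.argsort(rank_b)
tau, _ = kendalltau(pos_a, pos_b)
print(tau)
```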
m
a366e951-3664-4adb-b718-14d3e8ce1677
We demonstrate a novel paradigm for generating human-relevant biomedical insights from observational datasets of limited size by leveraging and augmenting them with animal model data. Experiments are presented on radiation exposure and carcinogenesis, with a key novel contribution being the identification of SLC8A3 as a potentially causal feature. This paradigm also has utility across many biomedical settings, e.g., when combining data from in-vivo experiments in living whole organisms with in-vitro experiments on cell and tissue cultures.
d