Tasks: Text Classification
Sub-tasks: multi-class-classification
Modalities: Text
Languages: English
Size: 10K - 100K
Paper title,Paper link,Impact statement,ID
Model Agnostic Multilevel Explanations,https://proceedings.neurips.cc/paper/2020/file/426f990b332ef8193a61cc90516c1245-Paper.pdf,"With the proliferation of deep learning, explaining or understanding the reasons behind the model’s decisions has become extremely important in many critical applications [30]. Many explainability methods have been proposed in the literature [7, 12, 11]; however, they either provide instance-specific local explanations or fit to the entire dataset and create global explanations. Our proposed method is able to create both such explanations, but in addition, it also creates explanations for subgroups in the data, and all of this jointly. We thus create explanations at multiple granularities (between local and global). This multilevel aspect has not been sufficiently researched before. In fact, [4] has recently stressed the importance of having such multilevel explanations for successfully meeting the requirements of Europe’s General Data Protection Regulation (GDPR) [5]. They clearly state that simply having local or global explanations may not be sufficient for providing satisfactory explanations in many cases. There are also potential risks with this approach. The first is that if the base local explainer is non-robust or inaccurate [34, 35], then the explanations generated by our tree may also have to be considered cautiously. However, this is not specific to our method, and applies to several post-hoc explainability methods that try to explain a black-box model. The way to mitigate this is to ensure that the local explanation methods are adapted (such as by choosing appropriate neighborhoods in LIME) to provide robust and accurate explanations. Another risk could be that such detailed multilevel explanations may reveal too much about the internals of the model (a similar scenario for gradient-based models is discussed in [36]) and hence may raise privacy concerns. Mitigation could happen by selectively revealing the levels / pruning the tree, or by having a budget of explanations for each user to balance the level of explanations vs. the exposure of the black-box model.",50
Pipeline PSRO: A Scalable Approach for Finding Approximate Nash Equilibria in Large Games,https://proceedings.neurips.cc/paper/2020/file/e9bcd1b063077573285ae1a41025f5dc-Paper.pdf,"Stratego and Barrage Stratego are very large imperfect information board games played by many around the world. Although variants of self-play reinforcement learning have achieved grandmaster level performance on video games, it is unclear if these algorithms could work on Barrage Stratego or Stratego because they are not principled and fail on smaller games. We believe that P2SRO will be able to achieve increasingly good performance on Barrage Stratego and Stratego as more time and compute are added to the algorithm. We are currently training P2SRO on Barrage Stratego and we hope that the research community will also take interest in beating top humans at these games as a challenge and inspiration for artificial intelligence research. This research focuses on how to scale up algorithms for computing approximate Nash equilibria in large games. These methods are very compute-intensive when applied to large games. Naturally, this favors large tech companies or governments with enough resources to apply this method for large, complex domains, including in real-life scenarios such as stock trading and e-commerce. It is hard to predict who might be put at an advantage or disadvantage as a result of this research, and it could be argued that powerful entities would gain by reducing their exploitability. However, the same players already do and will continue to benefit from information and computation gaps by exploiting suboptimal behavior of disadvantaged parties. It is our belief that, in the long run, preventing exploitability and striving as much as practical towards a provably efficient equilibrium can serve to level the field, protect the disadvantaged, and promote equity and fairness.",51
POLY-HOOT : Monte-Carlo Planning in Continuous Space MDPs with Non-Asymptotic Analysis,https://proceedings.neurips.cc/paper/2020/file/30de24287a6d8f07b37c716ad51623a7-Paper.pdf,"We believe that researchers of planning, reinforcement learning, and multi-armed bandits, especially those who are interested in the theoretical foundations, would benefit from this work. In particular, although Monte-Carlo tree search (MCTS) is intuitive, easy to implement, and empirically widely used, a theoretical analysis of MCTS in continuous domains had not been established through the lens of non-stationary bandits prior to this work. In this work, inspired by the recent advances in finite-space Monte-Carlo tree search, we have provided such a result, and thus theoretically justified the efficiency of MCTS in continuous domains. Although Monte-Carlo tree search has demonstrated great performance in a wide range of applications, a theoretical explanation of its empirical successes is relatively lacking. Our theoretical results have advocated the use of non-stationary bandit algorithms, which might guide the design of new planning algorithms that enjoy better empirical performance in practice. Our results might also be helpful for researchers interested in robotics and control applications, as our algorithm can be readily applied to such planning problems with continuous domains. As a theory-oriented work, we do not believe that our research will cause any ethical issue, or put anyone at any disadvantage.",52
Sliding Window Algorithms for k-Clustering Problems,https://proceedings.neurips.cc/paper/2020/file/631e9c01c190fc1515b9fe3865abbb15-Paper.pdf,"Clustering is a fundamental unsupervised machine learning problem that lies at the core of multiple real-world applications. In this paper, we address the problem of clustering in a sliding window setting. As we argued in the introduction, the sliding window model allows us to discard old data which is a core principle in data retention policies. Whenever a clustering algorithm is used on user data it is important to consider the impact it may have on the users. In this work we focus on the algorithmic aspects of the problem and we do not address other considerations of using clustering that may be needed in practical settings. For instance, there is a burgeoning literature on fairness considerations in unsupervised methods, including clustering, which further delves into these issues. We refer to this literature [22, 40, 11] for addressing such issues.",53
Optimal Approximation - Smoothness Tradeoffs for Soft-Max Functions,https://proceedings.neurips.cc/paper/2020/file/1bd413de70f32142f4a33a94134c5690-Paper.pdf,"In this paper, we study some basic mathematical properties of soft-max functions and we propose new ones that are optimal with respect to some mathematical criteria. Soft-max functions are fundamental building blocks with many applications, from Machine Learning to Differential Privacy to Resource Allocation. All of these fields have societal impact: Differential Privacy has already been a fundamental mathematical tool to ensure privacy in the digital world and in many cases it has been the only available method to get privacy from services that take place in the digital world. Resource allocation, and in particular auction theory, also has societal impact, from the way that items are sold in online platforms to the way that ride-sharing applications decide prices, to the nationwide auctions that allocate bandwidth of the frequency spectrum to broadcast companies. Since our paper contributes to one of the fundamental tools in all these areas we believe that it can potentially have a positive impact by improving the outcomes of many algorithms in these topics in terms of e.g. privacy in Differential Privacy applications and revenue in Resource Allocation applications. Although our paper is mostly mathematical in nature, we also present some experimental results on data collected from the DBLP dataset. Although we used a public dataset, we acknowledge that the data we used may be biased.",54
Learning with Operator-valued Kernels in Reproducing Kernel Krein Spaces,https://proceedings.neurips.cc/paper/2020/file/9f319422ca17b1082ea49820353f14ab-Paper.pdf,"The theoretical tools introduced in the paper for generalized operator-valued kernels and function-valued Reproducing Kernel Krein Spaces (RKKS) are new and will promote research in investigating more sophisticated techniques for handling function data and other data with complicated structures. The proposed methods and algorithms have been applied to a speech inversion problem, and accurate predictions of function-valued outputs in such applications might be useful for improving the current understanding of the speech generation process in humans. To the best of our knowledge, our work does not have any negative impact.",55
Task-Agnostic Amortized Inference of Gaussian Process Hyperparameters,https://proceedings.neurips.cc/paper/2020/file/f52db9f7c0ae7017ee41f63c2a7353bc-Paper.pdf,"Iterative hyperparameter optimization procedures often place a heavy computation burden on people who apply Gaussian processes to real-world applications. The optimization procedure itself usually has hyperparameters to be tuned (learning rate, number of iterations, etc.), which further increases the computational cost. Our proposed method amortizes this cost by training a single meta-model that is then useful across a wide range of tasks. Once the meta-model is trained, it can be repeatedly applied to future kernel hyperparameter selection tasks, reducing resource usage and carbon footprint. The minimal computation required by our method also makes it more accessible to the general public instead of only to those with abundant computing resources. Like most deep learning models, our neural model has the potential risk of overfitting and low robustness. In an effort to avoid this, we use only synthetic data generated from a family of kernel functions that is expressive enough to cover a variety of Gaussian process use cases. Our goal is to avoid biasing the model towards any particular task. Additionally, we impose regularizations such as permutation invariance and weight sharing to encourage generalizable representations. Even with all these efforts, our model might still produce misspecified hyperparameters, which can lead to poor prediction performance versus conventional MLL optimization procedures.",56
A Self-Tuning Actor-Critic Algorithm,https://proceedings.neurips.cc/paper/2020/file/f02208a057804ee16ac72ff4d3cec53b-Paper.pdf,"The last decade has seen significant improvements in Deep Reinforcement Learning algorithms. To make these algorithms more general, it became a common practice in the DRL community to measure the performance of a single DRL algorithm by evaluating it in a diverse set of environments, where at the same time, it must use a single set of hyperparameters. That way, it is less likely to overfit the agent’s hyperparameters to specific domains, and more general properties can be discovered. These principles are reflected in popular DRL benchmarks like the ALE and the DM control suite. In this paper, we focus on exactly that goal and design a self-tuning RL agent that performs well across a diverse set of environments. Our agent starts with a global loss function that is shared across the environments in each benchmark. But then, it has the flexibility to self-tune this loss function, separately in each domain. Moreover, it can adapt its loss function within a single lifetime to account for inherent non-stationarities in RL algorithms: exploration vs. exploitation, changing data distribution, and degree of off-policy. While using meta-learning to tune hyperparameters is not new, we believe that we have made significant progress that will convince many people in the DRL community to use metagradients. We demonstrated that our agent performs significantly better than the baseline algorithm in four benchmarks. The relative improvement is much more significant than in previous metagradient papers and is demonstrated across a wider range of environments. While each of these benchmarks is diverse on its own, together, they give even more significant evidence of our approach’s generality. Furthermore, we show that it is possible to self-tune tenfold more metaparameters of different types. We also showed that we gain improvement from self-tuning various subsets of the metaparameters, and that performance kept improving as we self-tuned more metaparameters. Finally, we have demonstrated how embracing self-tuning can help to introduce new concepts (leaky V-trace and parameterized auxiliary tasks) to RL algorithms without needing tuning.",57
Inverse Reinforcement Learning from a Gradient-based Learner,https://proceedings.neurips.cc/paper/2020/file/19aa6c6fb4ba9fcf39e893ff1fd5b5bd-Paper.pdf,"In this paper, we focus on the Inverse Reinforcement Learning [2, 15, 3, 21] task from a Learning Agent [17]. The first motivation to study Inverse Reinforcement Learning algorithms is to overcome the difficulties that can arise in specifying the reward function from human and animal behaviour. Sometimes, in fact, it is easier to infer human intentions by observing their behaviours than to design a reward function by hand. An example is helicopter flight control [1], in which we can observe a helicopter operator and through IRL a reward function is inferred to teach a physical remote-controlled helicopter. Another example is to predict the behavior of a real agent, as in route prediction for taxis [41, 42], anticipation of pedestrian interactions [12], or energy-efficient driving [38]. However, in many cases the agents are not really experts, and on the other hand, expert-only demonstrations cannot show their intention to avoid dangerous situations. We want to point out that learning what the agent wants to avoid because it is harmful is as important as learning their intentions. The possible outcomes of this research are the same as those of Inverse Reinforcement Learning mentioned above, avoiding the constraint that the agent has to be an expert. In future work, we will study how to apply the proposed algorithm in order to infer the pilot’s intentions when they learn a new circuit. A relevant possible complication of using IRL is error in the reward feature engineering, which can lead to errors in understanding the agent’s intentions. In an application such as autonomous driving, errors in the reward function can cause dangerous situations. For this reason, verifying the effectiveness of the retrieved rewards in a simulated environment is quite important.",58
Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains,https://proceedings.neurips.cc/paper/2020/file/55053683268957697aa39fba6f231c68-Paper.pdf,"This paper demonstrates how Fourier features can be used to enable coordinate-based MLPs to accurately model high-frequency functions in low-dimensional domains. Because we improve the performance of coordinate-based MLPs, we consider the impact of using those MLPs (such as those shown in Figure 1). The 2D image regression case (Figure 1b) has historically been limited in its practical use to artistic image synthesis [15, 39] and synthesizing images designed to mislead classifiers [31]. It is difficult to quantify the potential societal impact of artistic image synthesis, but the ability to generate improved adversarial attacks on classifiers poses a risk in domains such as robotics or self-driving cars, and necessitates the continued study of how classifiers can be made robust to such attacks [26]. Given that coordinate-based MLPs have been shown to exhibit notable compression capabilities [30], advances in coordinate-based MLPs for image or video regression may also serve as a basis for an effective compression algorithm. Improved image compression may have positive value in terms of consumer photography and electronics experiences (expanded on-device or cloud storage), but may have potentially negative value by enabling easier private or governmental surveillance by making recordings easier to store for longer periods of time. Improved performance of coordinate-based MLPs for CT and MRI imaging tasks (Figure 1d) may lead to improved medical imaging technologies, which generally have positive societal impacts: more accurate diagnoses and less expensive or more accessible diagnostic information for communities with limited access to medical services. However, given the serious impact an inaccurate medical diagnosis can have on a patient’s well-being, the consequences of failure for this use case are significant. Coordinate-based MLPs have also been used for 3D tasks such as predicting volume occupancy (Figure 1c) and view synthesis (Figure 1e) [27, 30]. Assessing the long-term impact of algorithms that reason about 3D occupancy is difficult, as this task is fundamental to much of perception and robotics and thus carries with it the broad potential upsides and downsides of increased automation [10]. But in the immediate future, improved view synthesis has salient positive effects: it may let filmmakers produce photorealistic effects and may allow for immersive 3D mapping applications or VR experiences. However, this progress may also inadvertently reduce employment opportunities for the human artists who currently produce these effects manually.",59
Regret in Online Recommendation Systems,https://proceedings.neurips.cc/paper/2020/file/f1daf122cde863010844459363cd31db-Paper.pdf,"This work, although mostly theoretical, may provide guidelines and insights towards an improved design of recommendation systems. The benefits of such improved design could be to enhance user experience with these systems, and to help companies improve their sales strategies through differentiated recommendations. The massive use of recommendation systems and their potential side effects have recently triggered a lot of interest. We must remain aware of and investigate such effects. These include: opinion polarization, a potential negative impact on users’ behavior and their willingness to pay, and privacy issues.",60
Robust Correction of Sampling Bias using Cumulative Distribution Functions,https://proceedings.neurips.cc/paper/2020/file/24368c745de15b3d2d6279667debcba3-Paper.pdf,"Machine learning is limited by the availability and quality of data. In many circumstances we may not have access to labeled data from our target distribution. Improving methods for covariate shift will help us extend the impact of the data we do have. Stably predicting a conditional probability distribution has an application that has recently gained notorious prominence. In the presence of a pandemic, one may wish to predict the death rate to calculate the expected toll on society. This translates exactly to predicting the conditional probability of dying given demographic information. Data that has been gathered often has a large sampling bias, since a person’s risk profile may affect their willingness to leave home and participate in a study, and their age may affect their availability for an online survey. A stable method for covariate shift in this situation can be a critical part of ensuring officials have accurate statistics when making decisions. One potential negative impact of the work would be if it were to be misunderstood and misused, yielding incorrect results. This is a concern in all areas of data science, so it is important that the conditions for appropriate use be well understood.",61
End-to-End Learning and Intervention in Games,https://proceedings.neurips.cc/paper/2020/file/c21f4ce780c5c9d774f79841b81fdc6d-Paper.pdf,"Our work helps understand and resolve social dilemmas resulting from pervasive conflict between self- and collective interest in human societies. The potential applications of the proposed modeling framework range from addressing externality in economic systems to guiding large-scale infrastructure investment. Planners, regulators, policy makers of various human systems could benefit from the decision making tools derived from this work.",62
Stage-wise Conservative Linear Bandits,https://proceedings.neurips.cc/paper/2020/file/804741413d7fe0e515b19a7ffc7b3027-Paper.pdf,"The main goal of this paper is to design and study novel “safe” learning algorithms for safety-critical systems with provable performance guarantees. An example arises in clinical trials where the effect of different therapies on a patient’s health is not known in advance. We select the baseline actions to be the therapies that have been historically chosen by medical practitioners, and the reward captures the effectiveness of the chosen therapy. The stage-wise conservative constraint modeled in this paper ensures that at each round the learner chooses a therapy whose expected reward, if not better, is close to that of the baseline policy. Another example arises in societal-scale infrastructure networks such as communication/power/transportation/data network infrastructure. We focus on the case where the reliability requirements of network operation at each round depend on the reward of the selected action, and certain baseline actions are known to not violate system constraints and to achieve certain levels of operational efficiency, as they have been used widely in the past. In this case, the stage-wise conservative constraint modeled in this paper ensures that at each round the reward of the action employed by the learning algorithm, if not better, should be close to that of the baseline policy in terms of network efficiency, and the reliability requirement for network operation must not be violated by the learner. Another example arises in recommender systems, where at each round we wish to avoid recommendations that are extremely disliked by the users. Our proposed stage-wise conservative constraint ensures that at no round would the recommendation system cause severe dissatisfaction for the users (consider perhaps how a really bad personal movie recommendation from a streaming platform would severely affect your view of the said platform).",63
Optimal Learning from Verified Training Data,https://proceedings.neurips.cc/paper/2020/file/6c1e55ec7c43dc51a37472ddcbd756fb-Paper.pdf,"The manipulation and fairness of algorithms form a significant barrier to practical application of theoretically effective machine learning algorithms in many real-world use cases. With this work, we have attempted to address the important problem of data manipulation, which has many societal consequences. Data manipulation is one of many ways in which an individual can “game the system” in order to secure beneficial outcomes for themselves to the detriment of others. Thus, reducing the potential benefits of data manipulation is a worthwhile focus. Whilst this paper is primarily theoretical in focus, we hope that our work will form a contributing step towards safe, fair, and effective application of machine learning algorithms in more practical settings.",64
Semi-Supervised Partial Label Learning via Confidence-Rated Margin Maximization,https://proceedings.neurips.cc/paper/2020/file/4dea382d82666332fb564f2e711cbc71-Paper.pdf,"In this paper, we study the problem of semi-supervised partial label learning which has been less investigated in weakly supervised learning. The developed techniques can be applied to scenarios where the supervision information collected from the environment is accurate. For ethical use of the proposed approach, one should expect proper acquisition of the candidate labeling information (e.g. crowdsourcing) as well as the unlabeled data. We believe that developing such techniques is important to meet the increasing needs of learning from weak supervision in many real-world applications.",65
Linearly Converging Error Compensated SGD,https://proceedings.neurips.cc/paper/2020/file/ef9280fbc5317f17d480e4d4f61b3751-Paper.pdf,"Our contribution is primarily theoretical. Therefore, a broader impact discussion is not applicable.",66
ICAM: Interpretable Classification via Disentangled Representations and Feature Attribution Mapping,https://proceedings.neurips.cc/paper/2020/file/56f9f88906aebf4ad985aaec7fa01313-Paper.pdf,"There is growing evidence that deep learning tools have the potential to improve the speed of review of medical images and that their sensitivity to complex high-dimensional textures can (in some cases) improve their efficacy relative to radiographers [19]. A recent study by Google DeepMind [31] suggested that deep learning systems could perform the role of a second-reader of breast cancer screenings to improve the precision of diagnosis relative to a single expert (which is standard clinical practice within the US). For brain disorders the opportunities and challenges for AI are more significant since the features of the disease are commonly subtle, presentations highly variable (creating greater challenges for physicians), and the datasets are much smaller in size in comparison to natural image tasks. The additional pitfalls that are common in deep learning algorithms [46], including the so-called ‘black box’ problem where it is unknown why a certain prediction is made, lead to further uncertainty and mistrust for clinicians when making decisions based on the results of these models. We developed a novel framework to address this problem by deriving a disease map, directly from a class prediction space, which highlights all class-relevant features in an image. Our objective is to demonstrate on a theoretical level that the development of more medically interpretable models is feasible, rather than developing a diagnostic tool to be used in the clinic. However, in principle, these types of maps may be used by physicians as an additional source of data in addition to mental exams, physiological tests and their own judgement to support diagnosis of complex conditions such as Alzheimer’s, autism, and schizophrenia. This may have significant societal impact as early diagnosis can improve the effectiveness of interventional treatment. Further, our model, ICAM, presents a specific advantage as it provides a ‘probability’ of belonging to a class along with a visualisation, supporting better understanding of the phenotypic variation of these diseases, which may improve mechanistic or prognostic modelling of these diseases. There remain ethical challenges as errors in prediction could influence clinicians towards wrong diagnoses and incorrect treatment, which could have very serious consequences. Further, studies have shown clear racial differences in brain structure [41, 44] which, if not sampled correctly, could lead to bias in the model and greater uncertainty for ethnic minorities [39]. These challenges would need to be addressed before any consideration of clinical translation. Clearly, the uncertainties in the model should be transparently conveyed to any end user, and in this respect the advantages of ICAM relative to its predecessors are plain to see.",67
Learning Representations from Audio-Visual Spatial Alignment,https://proceedings.neurips.cc/paper/2020/file/328e5d4c166bb340b314d457a208dc83-Paper.pdf,"Self-supervision reduces the need for human labeling, which is in some sense less affected by human biases. However, deep learning systems are trained from data. Thus, even self-supervised models reflect the biases in the collection process. To mitigate collection biases, we searched for 360° videos using queries translated into multiple languages. Despite these efforts, the adoption of 360° video cameras is likely not equal across different sectors of society, and thus learned representations may still reflect such discrepancies.",68
Sample complexity and effective dimension for regression on manifolds,https://proceedings.neurips.cc/paper/2020/file/977f8b33d303564416bf9f4ab1c39720-Paper.pdf,"The results in this paper further illuminate the role of low-dimensional structure in machine learning algorithms. An improved theoretical understanding of the performance of these algorithms is increasingly important as tools from machine learning become ever-more-widely adopted in a range of applications with significant societal implications. Although, in general, there are well-known ethical issues that can arise from inherent biases in the way data are sampled and presented to regression and classification algorithms, we do not have reason to believe that the methods presented in this paper would either enhance or diminish these issues. Our analysis is abstract and, for better or for worse, assumes a completely neutral sampling model (uniform over a manifold).",69
Curriculum by Smoothing,https://proceedings.neurips.cc/paper/2020/file/f6a673f09493afcd8b129a0bcf1cd5bc-Paper.pdf,"In this paper we describe a technique to fundamentally improve training for CNNs. This paper has impact wherever CNNs are used, since they can also be trained using the same regime, which would result in improved task performance. Applications of CNNs, such as object recognition, can be used for good or malicious purposes. Any user or practitioner has the ultimate impact and authority on how to deploy such a network in practice. The user can use our proposed strategy to improve their underlying machine learning algorithm, and deploy it in whichever way they choose.",70
No-regret Learning in Price Competitions under Consumer Reference Effects,https://proceedings.neurips.cc/paper/2020/file/f51238cd02c93b89d8fbee5667d077fc-Paper.pdf,"This work sheds light on market stability concerning competition among multiple firms under consumers’ reference effects. As discussed at the very beginning of the introduction, market stability is an important feature for business organizations and entities who are interacting in complex and highly dynamic environments. In a stable market, firms can better understand market behavior to guide their long-term decision making. This work shows that firms can obtain the desired market stability condition by running simple off-the-shelf online algorithms such as OMD. These algorithms do not require a large amount of information about market characteristics and perform very well in dynamic competitive environments. In many e-commerce and online retail platforms, automated learning and pricing algorithms are prevalent. Thus, we believe that our paper provides firms with a simple automated solution for complex dynamic pricing decisions, which may potentially lead to stable markets.",71
Optimal Epoch Stochastic Gradient Descent Ascent Methods for Min-Max Optimization,https://proceedings.neurips.cc/paper/2020/file/3f8b2a81da929223ae025fcec26dde0d-Paper.pdf,A discussion about broader impact is not applicable since our work is very theoretical and currently has no particular application.,72
Influence-Augmented Online Planning for Complex Environments,https://proceedings.neurips.cc/paper/2020/file/2e6d9c6052e99fcdfa61d9b9da273ca2-Paper.pdf,"The potential impact of this work is precisely its motivation: making online planning more useful in real-world decision making scenarios, enabling more daily decisions to be made autonomously and intelligently, with promising applications including autonomous warehouse and traffic light control. Unlike simulators constructed by domain experts, which are in general easier to test and debug, an influence-augmented local simulator contains an approximate influence predictor learned from data, which may fail on rare inputs and result in catastrophic consequences, especially when controlling critical systems.",73
Deep Wiener Deconvolution: Wiener Meets Deep Learning for Image Deblurring,https://proceedings.neurips.cc/paper/2020/file/0b8aff0438617c055eb55f0ba5d226fa-Paper.pdf,"Since blur is a common artifact in imaging systems, such as from the point spread function of the optical system, image deblurring has a broad potential impact through a wide range of applications. These include satellite imaging, medical imaging, telescope imaging in astronomy, and portable device imaging. Our image deblurring technique based on the proposed deep Wiener deconvolution network can provide high-quality clear images to facilitate intelligent data analysis tasks in these fields, and it is apparent that applications, e.g., in medical imaging or portable device imaging, have significant societal impact. To illustrate its applicability, we provide some examples for potential applications of our approach in the supplemental material. Despite the many benefits of high-quality image deblurring, negative consequences can still arise, largely because image deblurring can present certain risks to privacy. For example, in order to protect the privacy of certain individuals depicted in visual media, such as on TV or in the press, their depiction will sometimes be blurred artificially to hide the individual’s identity. In this case, deblurring can pose the risk of revealing the person’s identity, thus damaging his/her privacy. Furthermore, it is important to be cautious of the results of any deblurring system, as failures could cause misjudgment. For example, the inaccurate restoration of numbers and letters can produce misleading information. Our proposed approach is robust to various noise levels and inaccurate kernels, which intuitively improves its adaptability to more complex scenes and thus minimizes the chance of such failures. Nevertheless, misjudgment based on incorrect restoration cannot be ruled out completely.",74
Regularizing Black-box Models for Improved Interpretability,https://proceedings.neurips.cc/paper/2020/file/770f8e448d07586afbf77bb59f698587-Paper.pdf,"Our user study plan has been approved by the IRB to minimize any potential risk to the participants, and the datasets used in this work are unlikely to contain sensitive information because they are public and well-studied. Within the Machine Learning community, we hope that ExpO will help encourage Interpretable Machine Learning research to adopt a more quantitative approach, both in the form of proxy evaluations and user studies. For broader societal impact, the increased interpretability of models trained with ExpO should be a significant benefit. However, ExpO does not address some issues with local explanations such as their susceptibility to adversarial attack or their potential to artificially inflate people’s trust in the model.",75
Agnostic Learning of a Single Neuron with Gradient Descent,https://proceedings.neurips.cc/paper/2020/file/3a37abdeefe1dab1b30f7c5c7e581b93-Paper.pdf,"This paper provides a theoretical analysis of gradient descent when used for learning a single neuron. As a theoretical work, its potential risk for negative societal impacts is extremely limited. On the other hand, our general lack of understanding of why gradient descent on large neural networks can find weights that have both small empirical risk and also small population risk is worrisome given the widespread adoption of large neural networks in sensitive technology applications. (A reasonable expectation for using a piece of technology is that we understand how and why it works.) Our work helps explain how, in a simple neural network model, gradient descent can learn solutions that generalize well even though the optimization problem is highly nonconvex and nonsmooth. As such, it provides a building block for understanding how more complex neural network models can be learned by gradient descent.",76
MetaPerturb: Transferable Regularizer for Heterogeneous Tasks and Architectures,https://proceedings.neurips.cc/paper/2020/file/84ddfb34126fc3a48ee38d7044e87276-Paper.pdf,"Our MetaPerturb regularizer effectively eliminates the need for retraining on the source task because it can generalize to any convolutional neural architecture and to any image dataset. This versatility is extremely helpful for lowering the energy consumption and training time required in transfer learning, because in the real world there exist extremely diverse learning scenarios that we have to deal with. Previous transfer learning or meta-learning methods have not been flexible and versatile enough to solve those diverse large-scale problems simultaneously, but our model can efficiently improve the performance with a single meta-learned regularizer. Also, MetaPerturb efficiently extends previous meta-learning to standard learning frameworks by avoiding the expensive bilevel optimization, reducing the computational cost of meta-training and thus further lowering energy consumption and training time.",77
A Catalyst Framework for Minimax Optimization,https://proceedings.neurips.cc/paper/2020/file/3db54f5573cd617a0112d35dd1e6b1ef-Paper.pdf,"Our work provides a family of simple and efficient algorithms for some classes of minimax optimization. We believe our theoretical results advance many applications in ML that require minimax optimization. Of particular interest are deep learning and fair machine learning. Deep learning is used in many safety-critical environments, including self-driving cars, biometric authentication, and so on. There is growing evidence that deep neural networks are vulnerable to adversarial attacks. Since adversarial attacks and defenses are often considered as two-player games, progress in minimax optimization will definitely empower both. Furthermore, minimax optimization problems provide insights and understanding into the balance and equilibrium between attacks and defenses. As a consequence, making good use of those techniques will boost the robustness of deep learning models and strengthen the security of their applications. Fairness in machine learning has attracted much attention, because it is directly relevant to policy design and social welfare. For example, courts use COMPAS for recidivism prediction. Researchers have shown that bias is introduced into many machine learning systems through skewed data, limited features, etc. One approach to mitigate this is adding constraints into the system, which naturally gives rise to minimax problems.",78
Contextual Reserve Price Optimization in Auctions via Mixed-Integer Programming,https://proceedings.neurips.cc/paper/2020/file/0e1bacf07b14673fcdb553da51b999a5-Paper.pdf,"This work presents new methods, and as such does not have direct societal impact. However, if the context provided allows the model to reason about protected classes or sensitive information, either directly or indirectly, the model, and therefore the application of this work, has the potential for adverse effects.",79
DynaBERT: Dynamic BERT with Adaptive Width and Depth,https://proceedings.neurips.cc/paper/2020/file/6f5216f8d89b086c18298e043bfe48ed-Paper.pdf,"Traditional machine learning computing relies on mobile perception and cloud computing. However, considering the speed, reliability, and cost of the data transmission process, cloud-based machine learning may cause delays in inference, user privacy leakage, and high data transmission costs. In such cases, in addition to end-cloud collaborative computing, it becomes increasingly important to run deep neural network models directly on edge devices. Recently, pre-trained language models like BERT have achieved impressive results in various natural language processing tasks. However, the BERT model contains tons of parameters, hindering its deployment to devices with limited resources. The difficulty of deploying BERT to these devices lies in two aspects. Firstly, the performances of various devices are different, and it is unclear how to deploy a BERT model suitable for each edge device based on its resource constraint. Secondly, the resource condition of the same device under different circumstances can be quite different. Once the BERT model is deployed to a specific device, dynamically selecting a part of the model for inference based on the device’s current resource condition is also desirable. Motivated by this, we propose DynaBERT. Instead of compressing the BERT model to a fixed size like existing BERT compression methods, the proposed DynaBERT can adjust its size and latency by selecting a sub-network with adaptive width and depth. By allowing both adaptive width and depth, the proposed DynaBERT also enables a large number of architectural configurations of the BERT model. Moreover, once the DynaBERT is trained, no further fine-tuning is required for each sub-network, and the benefits are threefold. Firstly, we only need to train one DynaBERT model, but can deploy different sub-networks to different hardware platforms based on their performances. Secondly, once one sub-network is deployed to a specific device, this device can select the same or smaller sub-networks for inference based on its dynamic efficiency constraints. Thirdly, different sub-networks sharing weights in one single model dramatically reduces the training and inference cost, compared to using different-sized models separately for different hardware platforms. This can reduce carbon emissions, and is thus more environmentally friendly. Though not originally developed for compression, sub-networks of the proposed DynaBERT outperform other BERT compression methods under the same efficiency constraints like #parameters, FLOPs, and GPU and CPU latency. Besides, the proposed DynaBERT at its largest size often achieves better performance than BERT_BASE of the same size. A possible reason is that allowing adaptive width and depth increases the training difficulty and acts as regularization, and so contributes positively to the performance. In this way, the proposed training method of DynaBERT also acts as a regularization method that can boost the generalization performance. Meanwhile, we also find that the compressed sub-networks of the learned DynaBERT have good interpretability. In order to maintain the representation power, the attention patterns of sub-networks with smaller width or depth of the trained DynaBERT exhibit function fusion, compared to the full-sized model. Interestingly, these attention patterns even explain the enhanced performance of DynaBERT on some tasks, e.g., the enhanced ability of distinguishing linguistically acceptable and non-acceptable sentences for CoLA. Besides the positive broader impacts above, since DynaBERT enables easier deployment of BERT, it also makes the negative impacts of BERT more severe. For instance, its application in dialogue systems can replace help-desks and cause job loss. Extending our method to generative models like GPT also faces the risk of generating offensive, biased or unethical outputs.",80
CoinDICE: Off-Policy Confidence Interval Estimation,https://proceedings.neurips.cc/paper/2020/file/6aaba9a124857622930ca4e50f5afed2-Paper.pdf,"This research is fundamental and targets a broad question in reinforcement learning. The ability to reliably assess uncertainty in off-policy evaluation would have significant positive benefits for safety-critical applications of reinforcement learning. Inaccurate uncertainty estimates create the danger of misleading decision makers and could lead to detrimental consequences. However, our primary goal is to improve these estimators and reduce the ultimate risk of deploying reinforcement-learned systems. The techniques are general and do not otherwise target any specific application area.",81
A mean-field analysis of two-player zero-sum games,https://proceedings.neurips.cc/paper/2020/file/e97c864e8ac67f7aed5ce53ec28638f5-Paper.pdf,"We study algorithms designed to find equilibria in games, provide theoretical guarantees of convergence and test their performance empirically. Among other applications, our results give insight into training algorithms for generative adversarial networks (GANs), which are useful for many relevant tasks such as image generation, image-to-image or text-to-image translation and video prediction. As always, we note that machine learning improvements like ours come in the form of “building machines to do X better”. For a sufficiently malicious or ill-informed choice of X, such as surveillance or recidivism prediction, almost any progress in machine learning might indirectly lead to a negative outcome, and our work is not excluded from that.",82
Woodbury Transformations for Deep Generative Flows,https://proceedings.neurips.cc/paper/2020/file/3fb04953d95a94367bb133f862402bce-Paper.pdf,"This paper presents fundamental research on increasing the expressiveness of deep probabilistic models. Its impact is therefore linked to the various applications of such models. By enriching the class of complex deep models for which we can train with exact likelihood, we may enable a wide variety of applications that can benefit from modeling of uncertainty. However, a potential danger of this research is that deep generative models have been recently applied to synthesize realistic images and text, which can be used for misinformation campaigns.",83
Walking in the Shadow: A New Perspective on Descent Directions for Constrained Minimization,https://proceedings.neurips.cc/paper/2020/file/96f2d6069db8ad895c34e2285d25c0ed-Paper.pdf,We believe that this work does not have any foreseeable negative ethical or societal impact.,84
Estimating weighted areas under the ROC curve,https://proceedings.neurips.cc/paper/2020/file/5781a2637b476d781eb3134581b32044-Paper.pdf,A solid mathematical basis is beneficial to the development of practical statistical methods. We believe that the present work improves the understanding of ROC-curves and the optimization of score functions used in machine learning and medical diagnostics.,85
Weakly-Supervised Reinforcement Learning for Controllable Behavior,https://proceedings.neurips.cc/paper/2020/file/1bd69c7df3112fb9a584fbd9edfc6c90-Paper.pdf,"We highlight two potential impacts for this work. Most immediately, weak supervision from humans may be an inexpensive yet effective step towards human-AI alignment [34, 55]. While prior work [3, 11, 42] has already shown how weak supervision in the form of preferences can be used to train agents, our work explores how a different type of supervision – invariance to certain factors – can be elicited from humans and injected as an inductive bias in an RL agent. One important yet delicate form of invariance is fairness. In many scenarios, we may want our agent to treat humans of different ages or races equally. While this fairness might be encoded in the reward function, our method presents an alternative, where fairness is encoded as observations’ invariance to certain protected attributes (e.g., race, gender). One risk with this work is misspecification of the factors of variation. If some factors are ignored, then the agent may require longer to solve certain tasks. More problematic is if spurious factors of variation are added to the dataset. In this case, the agent may be “blinded” to parts of the world, and performance may suffer. A question for future work is the automatic discovery of these spurious weak labels.",86
Robustness Analysis of Non-Convex Stochastic Gradient Descent using Biased Expectations,https://proceedings.neurips.cc/paper/2020/file/bd4d08cd70f4be1982372107b3b448ef-Paper.pdf,"Given the theoretical nature of the work, the authors do not believe this section is applicable to the present contribution, as its primary goal is to provide insights into a classical algorithm of the machine learning community and it does not provide novel applications per se.",87
Fighting Copycat Agents in Behavioral Cloning from Observation Histories,https://proceedings.neurips.cc/paper/2020/file/1b113258af3968aaf3969ca67e744ff8-Paper.pdf,"In this paper, we introduce a systematic approach to combat the “copycat” problem in behavioral cloning with observation histories. Behavioral cloning can be applied to a wide range of applications, such as robotics, natural language, decision making, as well as economics. Our method is particularly useful for offline behavioral cloning with partially observed states. Offline imitation is currently one of the most promising ways to achieve learned control in the real world. Our method can improve the real-world performance of behavior cloning agents, which could enable wider use of behavior cloning agents in practice. This could help to automate repetitive processes previously requiring human workers. While on the one hand, this has the ability to free up human time and creativity for more rewarding tasks, it also raises the concerning possibility of the loss of blue collar jobs. To mitigate the risks, it is important to promote policy and legislation to protect the interests of the workers who might be affected during the adoption of such technology.",88
Adversarial Training is a Form of Data-dependent Operator Norm Regularization,https://proceedings.neurips.cc/paper/2020/file/ab7314887865c4265e896c6e209d1cd6-Paper.pdf,"The existence of adversarial examples, i.e. small perturbations of the input signal, often imperceptible to humans, that are sufficient to induce large changes in the model output, poses a real danger when deep neural networks are deployed in the real world, as potentially safety-critical machine learning systems become vulnerable to attacks that can alter the system’s behaviour in malicious ways. Understanding the origin of this vulnerability and/or acquiring an understanding of how to robustify deep neural networks against such attacks thus becomes crucial for a safe and responsible deployment of machine learning systems. Who may benefit from this research: Our work contributes to understanding the origin of this vulnerability in that it sheds new light onto the attack algorithms used to find adversarial examples. It also contributes to building robust machine learning systems in that it allows practitioners to make more informed and well-founded decisions when training robust models. Who may be put at a disadvantage from this research: Our work, like any theoretical work on adversarial examples, may increase the level of understanding of a malevolent person intending to mount adversarial attacks against deployed machine learning systems which may ultimately put the end-users of these systems at risk. We would like to note, however, that the attack algorithms we analyze in our work already exist and that we believe that the knowledge gained from our work is more beneficial to making models more robust than it could possibly be used to designing stronger adversarial attacks. Consequences of failure of the system: Our work does not by itself constitute a system of any kind, other than providing a rigorous mathematical framework within which to better understand adversarial robustness.",89
Off-Policy Interval Estimation with Lipschitz Value Iteration,https://proceedings.neurips.cc/paper/2020/file/59accb9fe696ce55e28b7d23a009e2d1-Paper.pdf,"Off-policy interval evaluation not only can advise end-users on deploying a new policy, but can also serve as an intermediate step for later policy optimization. Our proposed methods also fill a gap in the theoretical understanding of Markov structure in Lipschitz regression. Our current work stands as a contribution to fundamental ML methodology, and we do not foresee potential negative impacts.",90
Myersonian Regression,https://proceedings.neurips.cc/paper/2020/file/67e235e7f2fa8800d8375409b566e6b6-Paper.pdf,"While our work is largely theoretical, we feel it can have downstream impact in the design of better marketplaces such as those for internet advertisement. Better pricing can increase both the efficiency of the market and the revenue of the platform. The latter is important since the revenue of platforms keeps such services (e.g. online newspapers) free for most users.",91
Locally Differentially Private (Contextual) Bandits Learning,https://proceedings.neurips.cc/paper/2020/file/908c9a564a86426585b29f5335b619bc-Paper.pdf,"This work is mostly theoretical, with no negative outcomes. (Contextual) bandits learning has been widely used in real applications, which heavily relies on users’ data that may contain personal private information. To protect users’ privacy, we adopt the appealing solid notion of privacy – Local Differential Privacy (LDP) – which can protect each user’s data before collection, and design (contextual) bandit algorithms under the guarantee of LDP. Our algorithms can be easily used in real applications, such as recommendation and advertising, to protect data privacy and ensure the utility of private algorithms simultaneously, which will benefit everyone in the world.",92
ImpatientCapsAndRuns: Approximately Optimal Algorithm Configuration from an Infinite Pool,https://proceedings.neurips.cc/paper/2020/file/ca5520b5672ea120b23bde75c46e76c6-Paper.pdf,"We expect that our theorems will guide the design of future algorithm configuration procedures. We note that speeding up computationally expensive algorithms saves time, money, and electricity, arguably reducing carbon emissions and yielding social benefit. The algorithms we study can be applied to a limitless range of problems and so could yield both positive and negative impacts; however, we do not foresee our work particularly amplifying such impacts beyond the computational speedups already discussed.",93
Faithful Embeddings for Knowledge Base Queries,https://proceedings.neurips.cc/paper/2020/file/fe74074593f21197b7b7be3c08678616-Paper.pdf,"Overview. This work addresses a general scientific question, query embedding (QE) for knowledge bases, and evaluates a new method, especially on a KB question-answering (KBQA) task. A key notion in the work is the faithfulness of QE methods, that is, their agreement with deductive inference when the relevant premises are explicitly available. The main technical contribution of the paper is to show that massive improvements in faithfulness are possible, and that faithful QE systems can lead to substantial improvements in KBQA. In the following, we discuss how these advances may affect risks and benefits of knowledge representation and question answering technology. Query embedding. QE, and more generally KBE, is a way of generalizing the contents of a KB by building a probabilistic model of the statements in, or entailed by, a KB. This probabilistic model finds statements that could plausibly be true, but are not explicitly stored: in essence it is a noisy classifier for possible facts. Two risks need to be considered in any deployment of such technology: first, the underlying KB may contain (mis)information that would improperly affect decisions; second, learned generalizations may be wrong or biased in a variety of ways that would lead to improperly justified decisions. In particular, training data might reflect societal biases that will be thereby incorporated into model predictions. Uses of these technologies should provide audit trails and recourse so that their predictions can be explained to and critiqued by affected parties. KB question-answering. General improvements to KBQA do not have a specific ethical burden, but like any other such technologies, their uses need to be subject to specific scrutiny. The general technology does require particular attention to accuracy-related risks. In particular, we propose a substantial “softening” of the typical KBQA architecture (which generally parses a question to produce a single hard KB query, rather than a soft mixture of embedded queries). In doing this we have replaced a traditional KB, a mature and well-understood technology, with QE, a new and less well-understood technology. Although our approach makes learning end-to-end from denotations more convenient, and helps us reach a new state-of-the-art on some benchmarks, it is possible that replacing hard queries to a KB with soft queries could lead to confusion as to whether answers arise from highly reliable KB facts, reliable reasoning over these facts, or are noise introduced by the soft QE system. As in KBE/QE, this has consequences for downstream tasks if uncertain predictions are misinterpreted by users. Faithful QE. By introducing the notion of faithfulness in studies of approximate knowledge representation in QE, we provided a conceptual yardstick for examining the accuracy and predictability of such systems. In particular, the centroid-sketch formalism we advocate often allows one to approximately distinguish entailed answers vs generalization-based answers by checking sketch membership. In addition to quantitatively improving faithfulness, EmQL’s set representation thus may qualitatively improve the interpretability of answers. We leave further validation of this conjecture to future work.",94
Multi-Task Temporal Shift Attention Networks for On-Device Contactless Vitals Measurement,https://proceedings.neurips.cc/paper/2020/file/e1228be46de6a0234ac22ded31417bc7-Paper.pdf,"Non-contact camera-based vital sign monitoring has great potential as a tool for telehealth. Our proposed system can promote global health equity and make healthcare more accessible for those in rural areas or those who find it difficult to travel to clinics and hospitals in-person (perhaps because of age, mobility issues or care responsibilities). These needs are likely to be particularly acute in low-resource settings. Non-contact sensing has other potential benefits for measuring the vitals of infants who ideally would not have contact sensors attached to their delicate skin. Furthermore, due to the exceptionally fast inference speed, the computational budget required for our proposed system is minimal. Therefore, people who cannot afford high-end computing devices will still be able to access the technology. While low-cost, ubiquitous sensing democratizes physiological measurement, it presents other challenges. If measurement can be performed from only a video, what happens if we detect a health condition in an individual when analyzing a video for other purposes? When and how should that information be disclosed? If the system fails in a context where a person is in a remote location, it may lead them to panic. It is also important to consider how such technology could be used by “bad actors” or applied with negligence and without sufficient forethought for the implications. Non-contact sensing could be used to measure personal physiological information without the knowledge of the subject. Law enforcement might be tempted to apply this in an attempt to detect individuals who appear “nervous” via signals such as an elevated heart rate or irregular breathing, or an employer may surreptitiously screen prospective employees for health conditions without their knowledge during an interview. These applications would set a very dangerous precedent and would be illegal in many cases. Just as is the case with traditional contact sensors, it must be made transparent when these methods are being used and subjects should be required to consent before physiological data is measured or recorded. There should be no penalty for individuals who decline to be measured. Ubiquitous sensing offers the ability to measure signals in more contexts, but that does not mean that this should necessarily be acceptable. Just because cameras may be able to measure these signals in a new context, or with less effort, it does not mean they should be subject to any less regulation than existing sensors; in fact, quite the contrary. In the United States, the Health Insurance Portability and Accountability Act (HIPAA) and the HIPAA Privacy Rule set a standard for protecting sensitive patient data and there should be no exception with regard to camera-based sensing. In the case of videos there should be particular care in how videos are transferred, given that significant health data can be contained within the channel. That was one of the motivations for designing our methods to run on-device, as it can minimize the risks involved in data transfer.",95 | |
On Power Laws in Deep Ensembles,https://proceedings.neurips.cc/paper/2020/file/191595dc11b4d6e54f01504e3aa92f96-Paper.pdf,"In this work, we provide an empirical and theoretical study of existing models (namely, deep ensembles); we propose neither new technologies nor architectures, and thus we are not aware of any specific ethical or future societal impact. We would, however, like to point out a few benefits gained from our findings, such as optimization of resource consumption when training neural networks and contribution to the overall understanding of neural models. As far as we can tell, no negative consequences should follow from our research.",96 | |
Self-training Avoids Using Spurious Features Under Domain Shift,https://proceedings.neurips.cc/paper/2020/file/f1298750ed09618717f9c10ea8d1d3b0-Paper.pdf,"Our work promotes robustness and fairness in machine learning. First, we study algorithms that make machine learning models robust when deployed in the real world. Second, our work addresses the scenario where the target domain is under-resourced and hence collecting labels is difficult. Third, our theoretical work guides efforts to mitigate dataset bias. We demonstrate that curating a diverse pool of unlabeled data from the true population can help combat existing bias in labeled datasets. We give conditions for when bias will be mitigated and when it will be reinforced or amplified by popular algorithms used in practice. We take a first step towards understanding and preventing the adverse effects of self-training.",97 | |
Estimation of Skill Distribution from a Tournament,https://proceedings.neurips.cc/paper/2020/file/60495b4e033e9f60b32a6607b587aadd-Paper.pdf,"The analysis of our algorithm, which forms the main contribution of this work, is theoretical in nature, and therefore, does not have any foreseeable societal consequences. On the other hand, applications of our algorithm to real-world settings could have potential societal impacts. As outlined at the outset of this paper, our algorithm provides a data-driven approach to address questions about perceived qualities of sporting events or other competitive enterprises, e.g., financial markets. Hence, a potential positive impact of our work is that subjective beliefs of stakeholders regarding the distributions of relative skills in competitive events can be moderated by a rigorous statistical method. In particular, our method could assist sports teams, sports tournament organizers, or financial firms to corroborate existing trends in the skill levels of players, debunk erroneous myths, or even unveil entirely new trends based on available data. However, our work may also have negative consequences if utilized without paying heed to its limitations. Recall that Step 1 of Algorithm 1 estimates BTL skill parameters of agents that participate in a tournament. Since the BTL model is a well-known approach for ranking agents [6, 7], it should be used with caution, as with any method that discriminates among agents. Indeed, the BTL model only takes into account wins or losses of pairwise games between agents, but does not consider the broader circumstances surrounding these outcomes. For example, in the context of soccer, the BTL model does not consider the goal difference in a game to gauge how significant a win really is, or take into account the injuries sustained by players. Yet, rankings of teams or players may be used by team managements to make important decisions such as assigning remunerations. Thus, users of algorithms such as ours must refrain from solely using rankings or skill distributions to make decisions that may adversely affect individuals. Furthermore, on the modeling front, it is worth mentioning that the BTL model for pairwise comparisons may be too simplistic in certain real-world scenarios. In such cases, there are several other models of pairwise comparisons within the literature that may be more suitable, e.g., the Thurstonian model, cf. [21], or more general stochastically transitive models, cf. [31]. We leave the analysis of estimating skill distributions or related notions for such models as future work in the area.",98 | |
Gaussian Gated Linear Networks,https://proceedings.neurips.cc/paper/2020/file/c0356641f421b381e475776b602a5da8-Paper.pdf,"Regression models have long been ubiquitous in both industry and academia, and we are optimistic that our work can provide improvement to existing practice and results. Like any supervised learning technique, the output of this model is a function of its input data, so appropriate due diligence is required during all stages of data collection, training and deployment, e.g. with respect to issues of algorithmic fairness and bias, as well as safety and robustness.",99 | |
Compressing Images by Encoding Their Latent Representations with Relative Entropy Coding,https://proceedings.neurips.cc/paper/2020/file/ba053350fe56ed93e64b3e769062b680-Paper.pdf,"Our work presents a novel data compression framework and hence inherits both the upsides and downsides of compression. In terms of positive societal impacts, data compression reduces the bandwidth requirements for many applications and websites, making them less expensive to access. This increases accessibility to online content in rural areas with limited connectivity or underdeveloped infrastructure. Moreover, it reduces the energy requirement and hence the environmental impact of information processing systems. However, care must be taken when storing information in a compressed form for long time periods, and backwards-compatibility of decoders must be maintained, as data may otherwise be irrevocably lost, leading to what has been termed the Digital Dark Ages (Kuny, 1997).",100 | |
Throughput-Optimal Topology Design for Cross-Silo Federated Learning,https://proceedings.neurips.cc/paper/2020/file/e29b722e35040b88678e25a1ec032a21-Paper.pdf,"We have proposed topology design algorithms that can significantly speed up federated learning in a cross-silo setting. Improving the efficiency of federated learning can foster its adoption, allowing different entities to share datasets that otherwise would not be available for training. Federated learning is intended to protect data privacy, as the data is not collected at a single point. At the same time, a federated learning system, like any Internet-scale distributed system, may be more vulnerable to different attacks aiming to jeopardize training or to infer some characteristics of the local dataset by looking at the different messages [26, 92]. Encryption [10, 80, 8] and differential privacy [1] techniques may help prevent such attacks. Federated learning is less efficient than training in a highly-optimized computing cluster. It may in particular increase the energy costs of training, due to a more discontinuous usage of local computing resources and the additional cost of transmitting messages over long-distance links. To the best of our knowledge, energy considerations for federated learning have not been adequately explored, apart from a few papers considering FL for mobile devices [42, 97].",101 | |
The Potts-Ising model for discrete multivariate data,https://proceedings.neurips.cc/paper/2020/file/9e5f64cde99af96fdca0e02a3d24faec-Paper.pdf,"In cancer clinical trials, patients are assigned to different treatment groups, and for each patient, toxicities are collected. These toxicities are graded, high-dimensional and correlated. Patient-reported outcome questionnaires also collect patients’ responses to quality-of-life questions on a Likert-type scale after treatments. It is crucial to correctly model these kinds of data and estimate the main effects as well as the association between the toxicities, in order to determine the tolerability of treatments and their impact on patients’ quality of life. Our Potts-Ising model is designed for such toxicity data, but is applicable far beyond it to any survey and rating data with limited range, as well as sparse count data.",102 | |
On the equivalence of molecular graph convolution and molecular wave function with poor basis set,https://proceedings.neurips.cc/paper/2020/file/1534b76d325a8f591b52d302e7181331-Paper.pdf,This study will provide benefit for ML researchers who are interested in quantum physics/chemistry and applications for materials science/informatics.,103 | |
Dual-Free Stochastic Decentralized Optimization with Variance Reduction,https://proceedings.neurips.cc/paper/2020/file/e22312179bf43e61576081a2f250f845-Paper.pdf,This work does not present any foreseeable societal consequence.,104 | |
Improved Analysis of Clipping Algorithms for Non-convex Optimization,https://proceedings.neurips.cc/paper/2020/file/b282d1735283e8eea45bce393cefe265-Paper.pdf,"Deep neural networks have achieved great success in recent years. In this paper, we provide a strong justification for the clipping technique in training deep neural networks and provide a satisfactory answer on how to efficiently optimize a general, possibly non-convex, $(L_0, L_1)$-smooth objective function. This closely aligns with the community’s pursuit of explainability, controllability, and practicability of machine learning. Besides its efficiency in training deep neural networks, a series of recent works (Thakkar et al. [2019], Chen et al. [2020], Lee and Kifer [2020]) also studies the relation between clipping and privacy preservation, which appears to be a major concern in machine learning applications. Therefore, we hope that a thorough understanding of clipping methods will be beneficial to modern society.",105 | |
Learning Robust Decision Policies from Observational Data,https://proceedings.neurips.cc/paper/2020/file/d3696cfb815ab692407d9362e6f06c28-Paper.pdf,"We believe the work presented herein can provide a useful tool for decision support, especially in safety-critical applications where it is of interest to reduce the risk of incurring high costs. The methodology can leverage large and heterogeneous data on past decisions, contexts and outcomes, to improve human decision making, while providing an interpretable statistical guarantee for its recommendations. It is important, however, to consider the population from which the training data is obtained and used. If the method is deployed in a setting with a different population it may indeed fail to provide cost-reducing decisions. Moreover, if there are categories of features that are sensitive and subject to unwarranted biases, the population may need to be split into appropriate subpopulations or else the biases can be reproduced in the learned policies.",106 | |
A Benchmark for Systematic Generalization in Grounded Language Understanding,https://proceedings.neurips.cc/paper/2020/file/e5a90182cc81e12ab5e72d66e0b46fe3-Paper.pdf,"Systematic generalization characterizes human language and thought, but it remains a challenge for modern AI systems. The gSCAN benchmark is designed to stimulate further research on this topic. Advances in machine systematic generalization could facilitate improvements in learning efficiency, robustness, and human-computer interaction. We do not anticipate that the broader impacts would selectively benefit some groups at the expense of others.",107 | |
A Class of Algorithms for General Instrumental Variable Models,https://proceedings.neurips.cc/paper/2020/file/e8b1cbd05f6e6a358a81dee52493dd06-Paper.pdf,"Cause-effect estimation is crucial in many areas where data-driven decisions may be desirable, such as healthcare, governance or economics. These settings commonly share the characteristic that experimentation with randomized actions is unethical, infeasible or simply impossible. One of the promises of causal inference is to provide useful insights into the consequences of hypothetical actions based on observational data. However, causal inference is inherently based on assumptions, which are often untestable. Even a slight violation of the assumptions may lead to drastically different conclusions, potentially changing the desired course of action. Especially in high-stakes scenarios, it is thus indispensable to thoroughly challenge these assumptions. This work offers a technique to formalize such a challenge of standard assumptions in continuous IV models. It can thus help inform highly influential decisions. One important characteristic of our method is that while it can provide informative bounds under certain assumptions on the functional form of effects, the bounds will widen as less prior information supporting such assumptions is available. We can view this as a way of deferring judgment until stricter assumptions have been assessed and verified. Since our algorithms are causal inference methods, they require assumptions too. Therefore, our method also requires a careful assessment of these assumptions by domain experts and practitioners. In addition, as we are optimizing a non-convex problem with local methods, we have no theoretical guarantee of correctness of our bounds. Hence, if wrong assumptions for our model are accepted prematurely, or our optimization strategy fails to find global optima, our method may wrongly inform decisions. If these are high-stakes decisions, then wrong decisions can have significant negative consequences (e.g., a decision not to treat a patient that should be treated). If the data that this model is trained on is biased against certain groups (e.g., different sexes, races, genders), this model will replicate those biases. We believe a fruitful approach towards making our model more sensitive to uncertainties due to structurally-biased, unrepresentative data is to learn how to derive, and then inflate (to account for bias), uncertainty estimates for our bounds.",108 | |
A Generalized Neural Tangent Kernel Analysis for Two-layer Neural Networks,https://proceedings.neurips.cc/paper/2020/file/9afe487de556e59e6db6c862adfe25a4-Paper.pdf,"Deep learning has achieved tremendous success in various real-world applications such as image recognition, natural language processing, self-driving cars and disease diagnosis. However, many deep learning models are not interpretable, which greatly limits their application and can even cause danger in safety-critical applications. This work aims to theoretically explain the success of learning neural networks, and can help add transparency to deep learning methods that have been implemented and deployed in real applications. Our result makes deep learning more interpretable, which is crucial in applications such as self-driving cars and disease diagnosis. Moreover, our results can potentially guide the design of new deep learning models with better performance guarantees. As this paper focuses on theoretical results, no direct risk arises from it. However, if the theoretical results are over-interpreted and blindly used to design deep learning models for specific applications, bad performance may be expected, as there is still some gap between theory and practice.",109 | |
Big Self-Supervised Models are Strong Semi-Supervised Learners,https://proceedings.neurips.cc/paper/2020/file/fcbc95ccdd551da181207c0c1400c655-Paper.pdf,"The findings described in this paper can potentially be harnessed to improve accuracy in any application of computer vision where it is more expensive or difficult to label additional data than to train larger models. Some such applications are clearly beneficial to society. For example, in medical applications where acquiring high-quality labels requires careful annotation by clinicians, better semi-supervised learning approaches can potentially help save lives. Applications of computer vision to agriculture can increase crop yields, which may help to improve the availability of food. However, we also recognize that our approach could become a component of harmful surveillance systems. Moreover, there is an entire industry built around human labeling services, and technology that reduces the need for these services could lead to a short-term loss of income for some of those currently employed or contracted to provide labels.",110 | |
Learning Deep Attribution Priors Based On Prior Knowledge,https://proceedings.neurips.cc/paper/2020/file/a19883fca95d0e5ec7ee6c94c6c32028-Paper.pdf,"DAPr can be applied to a wide variety of problems for which prior knowledge is available about a dataset’s individual features. In our work we focus on applying our method to a synthetic dataset and two real-world medical datasets, though it should be easily extendable to other problem domains. As discussed in the introduction, a major barrier to the adoption of modern machine learning techniques in real-world settings is that of trust. In high-stakes domains, such as medicine, practitioners are wary of replacing human judgement with that of black box algorithms, even if the black box consistently outperforms the human in controlled experiments. This concern is well-founded, as many high-performing systems developed in research environments have been found to overfit to quirks in a particular dataset, rather than learn more generalizable patterns. In our work we demonstrate that the DAPr framework does help deep networks generalize to our test sets when sample sizes are limited. While these results are encouraging, debugging model behavior in the real world, where data cannot simply be divided into training and test sets, is a more challenging problem. Feature attribution methods are one potential avenue for debugging models; however, while it may be easy to tell from a set of attributions if e.g. an image model is overfitting to noise, it would be much more difficult for a human to determine that a model trained on gene expression data was learning erroneous patterns simply by looking at attributions for individual genes. By learning to explain a given feature’s global importance using meta-features, we believe that DAPr can provide meaningful insights into model behavior that can help practitioners debug their models and potentially deploy them in real-world settings. Nevertheless, we recognize the potential downsides with the adoption of complex machine learning interpretability tools. Previous results have demonstrated that interpretability systems can in fact lead users to have too much trust in models when a healthy dose of skepticism would be more appropriate. More research is needed to understand how higher-order explanation tools like DAPr influence user behavior to determine directions for future work.",111 | |
Model Inversion Networks for Model-Based Optimization,https://proceedings.neurips.cc/paper/2020/file/373e4c5d8edfa8b74fd4b6791d0cf6dc-Paper.pdf,"In this work we introduced model-inversion networks (MINs), a novel approach to model-based optimization, which can be utilized for solving both passive “data-driven” optimization problems and active model-based optimization problems. Model-based optimization is a generic black-box optimization framework that captures a wide range of practically interesting and highly relevant optimization problems, such as drug discovery, controller design, and optimization in computer systems. In this work, we demonstrate the efficacy of MINs in high-dimensional optimization problems with complex input types (raw pixels, raw neural network weights, protein sequences). We are particularly excited about the application of MINs in the domain of drug design and discovery, and other computational biology domains. The importance of the problem of drug design needs no motivation or justification, especially during the pandemic that mankind is facing now. Existing methods in place for these problems typically follow an “active” experimental pipeline – the designs proposed by an ML or computational model are evaluated in real life, and then the results of these evaluations are incorporated into further model training or model improvement. Often the evaluation phase is the bottleneck: this phase is highly expensive both in terms of computational resources and time, often requiring human intervention, and in some cases taking months of time. We can avoid these bottlenecks by instead solving such optimization problems to the best possible extent in the static data-driven setting, by effectively reusing both good and bad data from past experiments, and MINs are designed to be efficient at exactly this. Beyond the field of biology, there are several other applications for which our proposed method is relevant. Design problems in engineering, such as the design of aircraft, are potential applications of our method. There are also likely to be applications in computer systems and architectures. While effective model-based optimization algorithms can have considerable positive economic and technological growth effects, they can also enable applications with complex implications with regard to safety and privacy (for example, safety issues in drug design or aircraft design, or privacy issues in computer systems optimization), as well as complex economic effects, for example changing job requirements and descriptions due to automation of certain tasks. Both of these implications are not unique to our method, but more generally apply to situations where black-box autonomous neural network models are deployed in practice.",112 | |
Security Analysis of Safe and Seldonian Reinforcement Learning Algorithms,https://proceedings.neurips.cc/paper/2020/file/65ae450c5536606c266f49f1c08321f2-Paper.pdf,"In our paper, we discussed the application of Seldonian algorithms to the treatment of diabetes patients. We emphasize that the mathematical safety guarantees provided by Seldonian RL are not a replacement for domain-specific safety requirements (e.g., the diabetes treatment would still need oversight for medical safety), but still improve the potential for RL to be applied to problems with real-world consequences. Seldonian RL has also been proposed for creating fair algorithms, i.e., those that aim to reduce discriminative behavior in intelligent tutoring systems and loan approvals [31]. In the last decade, data breaches on the Democratic National Committee’s emails, and on companies such as Equifax and Yahoo! have made cyber attacks on systems and databases a very legitimate and ubiquitous concern [45; 33; 14]. Therefore, when creating safe AI algorithms that can directly impact people’s lives, we should ensure not only performance guarantees with high probability, but also the development of metrics that evaluate the “quality” of training data, which often reflect systemic biases and human error.",113 | |
Deep Transformation-Invariant Clustering,https://proceedings.neurips.cc/paper/2020/file/5a5eab21ca2a8fef4af5e35709ecca15-Paper.pdf,"The impact of clustering mainly depends on the data it is applied on. For instance, adding structure in user data can raise ethical concerns when users are assimilated to their cluster, and receive targeted advertisement and newsfeed. However, this is not specific to our method and can be said of any clustering algorithm. Also note that while our clustering can be applied for example to data from social media, the visual interpretation of the clusters it returns via the cluster centers respects privacy much better than showing specific examples from each cluster. Because our method provides highly interpretable results, it might bring increased understanding of clustering algorithm results for the broader public, which we think may be a significant positive impact.",114 | |
Improved guarantees and a multiple-descent curve for Column Subset Selection and the Nyström method,https://proceedings.neurips.cc/paper/2020/file/342c472b95d00421be10e9512b532866-Paper.pdf,"Our study offers a deeper theoretical understanding of the Column Subset Selection Problem. The nature of our work is primarily theoretical, with direct applications to feature selection and kernel approximation, as we noted in Section 1. The primary reason for feature selection as a method for approximating a given matrix, as opposed to a low rank approximation using an SVD, is interpretability, which is crucial in many scientific disciplines. Our analysis shows that in many practical settings, feature selection performs almost as well as SVD at approximating a matrix. As such, our work makes a stronger case for feature selection, wherever applicable, for the sake of interpretability. We also hope our work motivates further research into a fine grained analysis to quantify if machine learning problems are really as hard as worst-case bounds suggest them to be.",115 | |
Deep Archimedean Copulas,https://proceedings.neurips.cc/paper/2020/file/10eb6500bd1e4a3704818012a1593cc3-Paper.pdf,"Copulas have held the dubious honor of being partially responsible for the financial crisis of 2008 [23]. Back then, it was commonplace for analysts and traders to model prices of collateralized debt obligations (CDOs) by means of the Gaussian copula [22]. Gaussian copulas were extremely simple and gained popularity rapidly. Yet today, this method is widely criticised as being overly simplistic, as it effectively summarizes associations between securities into a single number. Of course, copulas have now found a much wider range of applications, many of which are more grounded than credit and risk modeling. Nonetheless, the criticism that the Gaussian copula, or for that matter any simple parametric measure of dependency, is too simple still stands. ACNet is one attempt to tackle this problem, possibly beyond financial applications. While still retaining the theoretical properties of Archimedean copulas, ACNet can model dependencies which have no simple parametric form, and can alleviate some difficulties researchers have when facing the problem of model selection. We hope that with a more complex model, the use of ACNet will be able to overcome some of the deficiencies exhibited by the Gaussian copula. Nonetheless, we continue to stress caution against the careless or flagrant application of copulas, or the overreliance on probabilistic modeling, in domains where such assumptions are not grounded. At a level closer to machine learning, ACNet essentially models (a restricted set of) cumulative distributions. As described in the paper, this has various applications (see for example, Scenario 2 in Section 3 of our paper), since it is computationally easy to obtain (conditional) densities from the distribution function, but not the other way round. We hope that ACNet will motivate researchers to explore alternatives to learning density functions and apply them where appropriate.",116 | |
On the Expressiveness of Approximate Inference in Bayesian Neural Networks,https://proceedings.neurips.cc/paper/2020/file/b6dfd41875bc090bd31d0b1740eb5b1b-Paper.pdf,"Bayesian approaches to deep learning problems are often proposed in situations where uncertainty estimation is critical. Often the justification given for this approach is the probabilistic framework of Bayesian inference. However, in cases where approximations are made, the quality of these approximations should also be taken into account. Our work illustrates that the uncertainty estimates given by approximate inference with commonly used algorithms may not qualitatively resemble the uncertainty estimates implied by Bayesian modelling assumptions. This may have adverse consequences if Bayesian neural networks are used in safety-critical applications. Our work motivates a careful consideration of these situations.",117 | |
Efficient Learning of Discrete Graphical Models,https://proceedings.neurips.cc/paper/2020/file/9d702ffd99ad9c70ac37e506facc8c38-Paper.pdf,"We believe that this work, as presented here, does not present any foreseeable societal consequence.",118 | |
Enabling certification of verification-agnostic networks via memory-efficient semidefinite programming,https://proceedings.neurips.cc/paper/2020/file/397d6b4c83c91021fe928a8c4220386b-Paper.pdf,"Our work enables verifying properties of verification-agnostic neural networks trained using procedures agnostic to any specification verification algorithm. While the present scalability of the algorithm does not allow it to be applied to SOTA deep learning models, in many applications it is vital to verify properties of smaller models running safety-critical systems (learned controllers running on embedded systems, for example). The work we have presented here does not address data related issues directly, and would be susceptible to any biases inherent in the data that the model was trained on. However, as a verification technique, it does not enhance biases present in any pre-trained model, and is only used as a post-hoc check. We do not envisage any significant harmful applications of our work, although it may be possible for adversarial actors to use this approach to verify properties of models designed to induce harm (for example, learning based bots designed to break spam filters or induce harmful behavior in a conversational AI system).",119 | |
A Robust Functional EM Algorithm for Incomplete Panel Count Data,https://proceedings.neurips.cc/paper/2020/file/e56eea9a45b153de634b23780365f976-Paper.pdf,"Understanding the dynamics for individuals who attempt to change and maintain behaviors to improve health has important societal value; for example, a comprehensive understanding of how smokers attempt to quit smoking may guide behavioral scientists to design better intervention strategies that tailor to the highest-risk windows of relapse. Our theory and method provide an approach to understanding a particular aspect of smoking behavior (the mean function). The resulting algorithm is robust to Poisson process violations, readily adaptable and simple to implement, highlighting the potential for its wider adoption. A negative use case could be a lack of sensitivity analysis around assumptions such as the missing-data mechanism, which may lead to misleading conclusions. Our current recommendation is to consult scientists about the plausibility of the assumption about missing data.",120 | |
Finding Second-Order Stationary Points Efficiently in Smooth Nonconvex Linearly Constrained Optimization Problems,https://proceedings.neurips.cc/paper/2020/file/1da546f25222c1ee710cf7e2f7a3ff0c-Paper.pdf,"Our main contributions in this work include both new theoretical and numerical results for solving nonconvex optimization problems under linear constraints. The theoretical part provides new insight into a mathematical problem, and the proposed algorithm is very general in the sense that it can be applied not only to machine learning problems, but also to other general linearly constrained problems in any other field. Therefore, this work would be beneficial both for scientists/professors who are performing research in the area of machine learning and for students who are studying operations research, engineering, data science, finance, etc. The theories and ideas in this work can potentially lead to significant improvements in “off-the-shelf” optimization solvers and packages by equipping them with efficient modules for escaping saddle points in the presence of linear constraints. In addition to the methodological developments, the viewpoint of looking at generic optimization problem instances could have potential broader impact on analyzing and resolving other issues in the continuous optimization field as well. While this work handles one specific hard task (i.e., finding SOSPs) by analyzing generic problem instances, this viewpoint could result in new tools and theories for dealing with other hard tasks for generic optimization instances. We have not found any negative ethical or societal consequences of this work.",121 | |
Rescuing neural spike train models from bad MLE,https://proceedings.neurips.cc/paper/2020/file/186b690e29892f137b4c34cfa40a3a4d-Paper.pdf,"Bridging the gap between statistical neuroscientific models such as autoregressive point processes and dynamical systems is a substantial challenge, not only from the perspective of generative modelling but also in terms of allowing a dynamical interpretation, which carries with it all the niceties that are afforded by stochastic dynamical systems. As such, while the motivation we drew upon comes from neuroscience, modelling, simulating and analyzing point process dynamics has broad applicability to the biological sciences and other fields. Our method has potential use in modelling within the social sciences, geophysics (e.g. earthquakes), astrophysics and finance. In many of those areas stable inference and simulation of future events would directly enable the ability to discern and shape social and economic trends, or effect policy safeguarding against baleful events.",122 | |
A Bayesian Nonparametrics View into Deep Representations,https://proceedings.neurips.cc/paper/2020/file/0ffaca95e3e5242ba1097ad8a9a6e95d-Paper.pdf,"This work has direct applications in deep generative models. Probabilistic models of latent spaces may inform the development of architectures and training methods that improve sample fidelity and control over sample semantics. While generative modelling has many positive applications – e.g. in computer-aided art and conversational systems – any work on generative models may potentially be used to produce deceptive and fraudulent content. This work also adds to the evidence that convolutional networks excel at exploiting patterns in data. However, it is important to recognize that our results do not speak to the issue of biases that may be inherited from training examples. In particular, undue trust in data-driven systems – including neural networks – runs the risk of reinforcing biases and prejudice existing in training data.",123 | |
Efficient Variational Inference for Sparse Deep Learning with Theoretical Guarantee,https://proceedings.neurips.cc/paper/2020/file/05a624166c8eb8273b8464e8d9cb5bd9-Paper.pdf,"We believe the ethical aspects are not applicable to this work. As for future societal consequences, deep learning has a wide range of applications such as computer vision and natural language processing. Our work provides a solution to overcome the drawbacks of modern deep neural networks, and also improves the understanding of deep learning. The proposed method could improve the existing applications. Specifically, sparse learning helps apply deep neural networks to hardware-limited devices, like cell phones or pads, which will broaden the horizon of deep learning applications. In addition, as a Bayesian method, not only a result, but also the knowledge of confidence or certainty in that result is provided, which could benefit people in various aspects. For example, in the application of cancer diagnostics, by providing the certainty associated with each possible outcome, Bayesian learning would assist medical professionals to make a better judgement about whether a tumor is cancerous or benign. Such an ability to quantify uncertainty would contribute to modern deep learning.",124 | |
Neural Architecture Generator Optimization,https://proceedings.neurips.cc/paper/2020/file/8c53d30ad023ce50140181f713059ddf-Paper.pdf,"As highlighted in [7], NAS literature has focused for a long time on achieving higher accuracies, no matter the source of improvement. This has led to the widespread use of narrowly engineered search spaces, in which all considered architectures share the same human-defined macro-structure. While this does lead to higher accuracies, it prevents those methods from ever finding truly novel architectures. This is detrimental both for the community, which has focused many works on marginally improving performance in a shallow pond, and for the environment [61]. As NAS is undoubtedly computationally intensive, researchers have the moral obligation to make sure these resources are invested in meaningful pursuits: our flexible search space, based on hierarchical graphs, has the potential to find truly novel network paradigms, leading to significant changes in the way we design networks. It is worth mentioning that, as our search space is fundamentally different from previous ones, it is not trivial to use the well-optimised training techniques (e.g. DropPath, Auxiliary Towers, etc.) which are commonly used in the field. While transferring those techniques is viable, we do believe that our new search space will open up the development of novel training techniques. We do however acknowledge that the computational costs of using our NAS approach are still relatively high - this may not be attractive to the industrial or academic user with limited resources. On the other hand, by converting NAS to a low-dimensional hyperparameter optimisation problem, we have significantly reduced the optimisation difficulty and opened up the chance of applying more optimisation techniques to NAS. Although only demonstrated with BOHB and MOBO in this work, we believe more query-efficient methods, such as BO works based on transfer learning [62, 63, 64, 65, 66, 67], can be deployed directly on our search space to further reduce the computation costs.",125 | |
f -GAIL: Learning f -Divergence for Generative Adversarial Imitation Learning,https://proceedings.neurips.cc/paper/2020/file/967990de5b3eac7b87d49a13c6834978-Paper.pdf,"This paper aims to advance imitation learning techniques by learning an optimal discrepancy measure from the f-divergence family, which has a wide range of applications in robotic engineering, system automation and control, etc. The authors do not expect the work will address or introduce any societal or ethical issues.",126 | |
Learning to Select the Best Forecasting Tasks for Clinical Outcome Prediction,https://proceedings.neurips.cc/paper/2020/file/abc99d6b9938aa86d1f30f8ee0fd169f-Paper.pdf,"This work presents a method for efficiently learning patient representations using EMR data. Although this is demonstrated with a subset of the full raw EMR, and for only a handful of clinical outcomes in intensive care patients, it is a proof-of-concept that may be useful for a range of other predictive modeling using various types of longitudinal health data. The impact may be greatest in low-data scenarios - e.g. clinical use-cases where labeling is very challenging or where there are few eligible patients in the EMR. The code for this method will be made available to the research community on GitHub. There are numerous ethical considerations associated with any EMR modeling, which have been discussed in the literature [38, 39]. Issues include numerous biases in the observational EMR data, e.g. on the basis of gender, ethnicity or socioeconomic status, which can propagate into predictive models. These fairness considerations also apply to representation learning architectures as presented here. Finally, if this method were to be brought forward to real world deployment in conjunction with a decision support tool, it would have to be subject to appropriate clinical safety review and trials across different populations, with consideration given to issues such as drift and robustness.",127 | |
Towards Understanding Hierarchical Learning: Benefits of Neural Representations,https://proceedings.neurips.cc/paper/2020/file/fb647ca6672b0930e9d00dc384d8b16f-Paper.pdf,"This paper extensively contributes to the theoretical frontier of deep learning. We do not foresee direct ethical or societal consequences. Instead, our theoretical finding reduces the gap between theory and practice, and is in sharp contrast to existing theories, which cannot show any advantage of deep networks over shallow ones. In view of a notably increasing trend towards establishing a quantitative framework using deep neural networks in diverse areas, e.g., computational social science, this paper will provide an important theoretical guideline for practitioners.",128 | |
IDEAL: Inexact DEcentralized Accelerated Augmented Lagrangian Method,https://proceedings.neurips.cc/paper/2020/file/ed77eab0b8ff85d0a6a8365df1846978-Paper.pdf,"Centralization of data is not always possible because of security and legacy concerns [14]. Our work proposes a new optimization algorithm in the decentralized setting, which can learn a model without revealing privacy-sensitive data. Potential applications include data coming from healthcare, environment, safety, etc., such as personal medical information [19, 20], keyboard input history [22, 32] and beyond.",129 | |
Adversarial Weight Perturbation Helps Robust Generalization,https://proceedings.neurips.cc/paper/2020/file/1ef91c212e30e14bf125e9374262401f-Paper.pdf,"Adversarial training is currently the most effective and promising defense against adversarial examples. In this work, we propose AWP to improve the robustness of adversarial training, which may help to build more secure and robust deep learning systems in the real world. At the same time, AWP introduces extra computation, which probably has negative impacts on environmental protection (e.g., low-carbon efforts). Further, the authors do not want this paper to bring overoptimism about AI safety to society. The majority of adversarial examples are based on known threat models (e.g., $L_p$ in this paper), and the robustness is also achieved on them. Meanwhile, deployed machine learning systems face attacks from all sides, and we are still far from complete model robustness.",130 | |
Residual Force Control for Agile Human Behavior Imitation and Extended Motion Synthesis,https://proceedings.neurips.cc/paper/2020/file/f76a89f0cb91bc419542ce9fa43902dc-Paper.pdf,"The proposed techniques, RFC and dual policy control, enable us to create virtual humans that can imitate a variety of agile human motions and autonomously exhibit long-term human behaviors. This is useful in many applications. In the context of digital entertainment, animators could use our approach to automatically animate numerous background characters to perform various motions. In game production, designers could make high-fidelity physics-based characters that interact with the environment robustly. In virtual reality (VR), using techniques like ours to improve motion fidelity of digital content could be important for applications such as rehabilitation, sports training, dance instruction and physical therapy. The learned motion policies could also be used for the preservation of cultural heritage such as traditional dances, ceremonies and martial arts. Our research on physics-based human motion synthesis combined with advances of human digitalization in computer graphics could be used to generate highly realistic human action videos which are visually and physically indistinguishable from real videos. Similar to the creation of ‘deepfakes’ using image synthesis technology, the technology developed in this work could enable more advanced forms of fake video generation, which could lead to the propagation of false information. To mitigate this issue, it is important that future research should continue to investigate the detection of synthesized videos of human motions.",131 | |
Every View Counts: Cross-View Consistency in 3D Object Detection with Hybrid-Cylindrical-Spherical Voxelization,https://proceedings.neurips.cc/paper/2020/file/f2fc990265c712c49d51a18a32b39f0c-Paper.pdf,"3D detection is the first stage in the computational pipeline for a self-driving car. Just as perception enables humans to make instant associations and act on them, the ability to identify what and where the visual targets are from the immediate surroundings is a fundamental pillar for the safe operation of an autonomous vehicle. The COVID-19 pandemic manifests greater need for autonomous driving and delivery robots, as contact-less delivery is encouraged. Though there is controversy about the ethics of autonomous vehicles, especially in their decision making, robust 3D detection with higher accuracy is always desired to improve safety. In addition, LiDAR point clouds do not capture person identity, and thus 3D detection on LiDAR point clouds does not involve privacy issues.",132 | |
Profile Entropy: A Fundamental Measure for the Learnability and Compressibility of Distributions,https://proceedings.neurips.cc/paper/2020/file/4dbf29d90d5780cab50897fb955e4373-Paper.pdf,"Classical information theory states that an i.i.d. sample $X^n \sim p$ contains $H(X^n) = nH(p)$ information, which provides little insight for statistical applications. We present a different view by decomposing the sample information into three parts: the labeling of the profile elements, the ordering of them, and the profile entropy. With no bias towards any symbols, the profile entropy arises as a fundamental measure unifying the concepts of estimation, inference, and compression. We believe this view could help researchers in the information theory, statistical learning theory, and computer science communities better understand the information composition of i.i.d. samples over discrete domains. The results established in this work are general and fundamental, and have numerous applications in privacy, economics, data storage, supervised learning, etc. A potential downside is that the theoretical guarantees of the associated algorithms rely on the correctness of the assumptions, e.g., the domain should be discrete and the sampling process should be i.i.d. In other words, it will be better if users can confirm these assumptions by prior knowledge, experience, or statistical testing procedures. Taking a different perspective, we think a potential research direction following this work is to extend these results to Markovian models, making them more robust to model misspecification.",133 | |
Distributed Newton Can Communicate Less and Resist Byzantine Workers,https://proceedings.neurips.cc/paper/2020/file/d17e6bcbcef8de3f7a00195cfa5706f1-Paper.pdf,"The advent of computationally-intensive machine learning (ML) models has changed the technology landscape in the past decade. The most powerful learning models are also the most expensive to train. For example, OpenAI’s GPT-3 language model has 175 billion parameters and takes USD 12 million to train! On top of that, machine learning training has a costly environmental footprint: a recent study shows that training a transformer with neural architecture search can have as much as five times the CO2 emissions of a standard car over its lifetime. While the really expensive models are relatively rare, training of moderately large ML models is now ubiquitous across the data science industry and elsewhere. Most of the training of machine learning models today is performed on distributed platforms (such as Amazon’s EC2). Any savings in energy - in the form of computation or communication - in distributed optimization will have a large positive impact. This paper seeks to speed up distributed optimization algorithms by minimizing inter-server communication and at the same time makes the optimization algorithms robust to adversarial failures. The protocols resulting from this paper are immediately implementable and can be adapted to any large-scale distributed training of a machine learning model. Further, since our algorithms are robust to Byzantine failure, the training process becomes more reliable and fail-safe. In addition to that, we think the theoretical content of this paper is instructive and some elements can be included in the coursework of a graduate class on distributed optimization, to exemplify the trade-off between some fundamental quantities in distributed optimization.",134 | |
TSPNet: Hierarchical Feature Learning via Temporal Semantic Pyramid for Sign Language Translation,https://proceedings.neurips.cc/paper/2020/file/8c00dee24c9878fea090ed070b44f1ab-Paper.pdf,"As of the year 2020, 466 million people worldwide, one in every ten people, have disabling hearing loss. By the year 2050, it is estimated that this number will grow to over 900 million [42]. Assisting deaf and hard-of-hearing people to participate fully and feel entirely included in our society is critical and can be facilitated by maximizing their ability to communicate with others in sign languages, thereby minimizing the impact of disability and disadvantage on performance. Communication difficulties experienced by deaf and hard-of-hearing people may lead to unwelcome feelings of isolation, frustration and other mental health issues. Their global cost, including the loss of productivity and deaf service support packages, is US$750 billion per annum in healthcare expenditure alone [42]. The technique developed in this work contributes to the design of automated sign language interpretation systems. Successful applications of such communication technologies would facilitate access and inclusion for all community members. Our work also promotes public awareness of people living with hearing or other disabilities, who are commonly under-represented in social activities. With more research on automated sign language interpretation, our ultimate goal is to encourage equitable distribution of health, education, and economic resources in society. Failure in translation leads to potential miscommunication. However, achieving highly accurate automated translation systems that are trustworthy even in life-critical emergency and healthcare situations requires further study and regulation. In scenarios of this kind, automated sign language translators are recommended to serve as auxiliary communication tools, rather than as an alternative to human interpreters. Moreover, the RPWT dataset was sourced from TV weather forecasting and is consequently biased towards this genre. Hence, its applicability to real-life use may be limited. Despite this linguistic limitation, RPWT remains the only existing large-scale dataset for sign language translation; this under-resourced area deserves more attention, and both datasets and models ought to be developed.",135 | |
Certifiably Adversarially Robust Detection of Out-of-Distribution Data,https://proceedings.neurips.cc/paper/2020/file/b90c46963248e6d7aab1e0f429743ca0-Paper.pdf,"In order to use machine learning in safety-critical systems, it is required that the machine learning system correctly flags its uncertainty. As neural networks have been shown to be overconfident far away from the training data, this work aims at overcoming this issue by not only enforcing low confidence on out-distribution images but even guaranteeing low confidence in a neighborhood around them. As a neural network should not flag that it knows when it does not know, this paper contributes to a safer use of deep learning classifiers.",136 | |
Interpretable Sequence Learning for COVID-19 Forecasting,https://proceedings.neurips.cc/paper/2020/file/d9dbc51dc534921589adf460c85cd824-Paper.pdf,"COVID-19 is an epidemic that is affecting almost all countries in the world at the moment. As of the first week of June, more than 6.5 million people have been infected, resulting, unfortunately, in more than 380k fatalities. The economic and sociological impacts of COVID-19 are significant, and will be felt for many years to come. Forecasting the severity of COVID-19 is crucial: for healthcare providers to deliver healthcare support for those who will be in the most need, for governments to take the most optimal policy actions while minimizing the negative impact of the outbreak, and for business owners to make crucial decisions on when and how to restart their businesses. With the motivation of helping all these actors, we propose a machine learning-based forecasting model that significantly outperforms alternative methods, including the ones used by healthcare providers and the public sector. Not only are our forecasts far more accurate, our model is also explainable by design. It is aligned with how epidemiology experts approach the problem, and the machine-learnable components shed light on which data features have the most impact on the outcomes. These would be crucial for a data-driven understanding of COVID-19, which can help domain experts with effective medical and public health decision-making. Beyond COVID-19 forecasting, our approach goes in the direction of integrating data-driven learning with the inductive bias of differential equations, while representing input-output relationships at a system level. Not only infectious disease modeling, but numerous scientific fields that use such equations, such as physics, environmental sciences, and chemistry, are expected to benefit from our contributions.",137 | |
Domain Generalization for Medical Imaging Classification with Linear-Dependency Regularization,https://proceedings.neurips.cc/paper/2020/file/201d7288b4c18a679e48b31c72c30ded-Paper.pdf,"Our proposed method shows reasonable potential for application in clinically realistic environments, especially in scenarios where only limited training samples are available and the capturing vendors and environments are diverse. In the short term, the potential benefit of the proposed research is that it could significantly alleviate the domain shift problem in medical image analysis, as evidenced in this paper. In the long term, it is expected that the principled methodology could offer new insights for intelligent medical diagnostic systems. One concrete example is that the medical imaging classification functionality can be incorporated into different types of smartphones (with different capturing sensors, resolutions, etc.) to assess the risk of skin disease (e.g. skin cancer in suspicious skin lesions) such that the terminal stage of skin cancer can be avoided. However, medical data can be protected by privacy regulation such that the protected attributes (e.g. gender, ethnicity) may not be released publicly for training purposes. In this sense, the trained model may lack fairness, or worse, may actively discriminate against a specific group of people (e.g. an ethnicity with a relatively small proportion of people). In the future, the proposed methodology can be feasibly extended to improve algorithmic fairness for numerous medical image analysis tasks while guaranteeing the privacy of the protected attributes.",138 | |
Learning Multi-Agent Coordination for Enhancing Target Coverage in Directional Sensor Networks,https://proceedings.neurips.cc/paper/2020/file/7250eb93b3c18cc9daa29cf58af7a004-Paper.pdf,"The target coverage problem is common in Directional Sensor Networks and arises in many real-world applications. For example, those who control cameras to capture sports match videos may benefit from our work, because our framework provides an automatic control solution that frees them from heavy and redundant labor. Surveillance camera networks may also benefit from this research. However, there is also a risk of misuse in the military field, e.g., using directional radar to monitor missiles or aircraft. The framework may also inspire the RL community in solving target-oriented tasks, e.g., collaborative navigation and predator-prey. If our method fails, all targets could fall out of the sensors’ views, so a rule-based fallback plan may be needed for unexpected conditions. We reset the training environment randomly to avoid leveraging biases in the data and to improve generalization.",139 | |
Simultaneously Learning Stochastic and Adversarial Episodic MDPs with Known Transition,https://proceedings.neurips.cc/paper/2020/file/c0f971d8cd24364f2029fcb9ac7b71f5-Paper.pdf,"This work is mostly theoretical, with no negative outcomes. Researchers working on theoretical aspects of online learning, bandit problems, and reinforcement learning (RL) may benefit from our results. Although our algorithm deals with the tabular setting and is not directly applicable to common RL applications with large state and action spaces, it sheds light on how to increase the robustness of a learning algorithm while adapting to specific instances, and serves as an important step towards developing more practical, adaptive, and robust RL algorithms, which in the long run might find applications in the real world.",140 | |
Unsupervised Representation Learning by Invariance Propagation,https://proceedings.neurips.cc/paper/2020/file/23af4b45f1e166141a790d1a3126e77a-Paper.pdf,"This work presents a novel unsupervised learning method which effectively utilizes large numbers of unlabeled images to learn representations useful for a wide range of downstream tasks, such as image recognition, semi-supervised learning, object detection, etc. Without labels annotated by humans, our method reduces the prejudice caused by human priors, which may guide models to learn more intrinsic information. The learned representations may benefit robustness in many scenarios, such as adversarial robustness, out-of-distribution detection, label corruption, etc. Moreover, unsupervised learning can be applied to autonomous learning in robotics: a robot can autonomously collect data without specifically labelling it and achieve lifelong learning. There also exist some potential risks for our method. Unsupervised learning depends solely on the distribution of the data itself to discover information; therefore, the learned model may be vulnerable to the data distribution. With a biased dataset, the model is likely to learn incorrect causal information. For example, in an autonomous system it is inevitable that bias will be introduced during the process of data collection due to the inherent constraints of the system. The model can also be easily attacked when the data used for training is contaminated intentionally. Additionally, since the learned representations can be used for a wide range of downstream tasks, it should be guaranteed that they are used for beneficial purposes. We see the effectiveness and convenience of the proposed method, as well as the potential risks. To mitigate the risks associated with unsupervised learning, we encourage researchers to keep an eye on the distribution of the collected datasets and to stop the use of the learned representations for harmful purposes.",141 | |
Deep Diffusion-Invariant Wasserstein Distributional Classification,https://proceedings.neurips.cc/paper/2020/file/ede7e2b6d13a41ddf9f4bdef84fdc737-Paper.pdf,"The proposed framework can considerably enhance conventional classification methods, whose performance is very sensitive to various types of perturbations (e.g., rotations, impulse noise, and down-scaling). The proposed Wasserstein distributional classifier represents both input data and target labels as probability measures, and its diffusion-invariant property prevents the classifier from being affected by severe perturbations. Hence, various research fields operating in real-world environments can benefit from exploiting our framework to obtain accurate classification results.",142 | |
Finding All ε-Good Arms in Stochastic Bandits,https://proceedings.neurips.cc/paper/2020/file/edf0320adc8658b25ca26be5351b6c4a-Paper.pdf,"The application of machine learning (ML) in domains such as advertising, biology, or medicine brings the possibility of utilizing large computational power and large datasets to solve new problems. It is tempting to use powerful, if not fully understood, ML tools to maximize scientific discovery. However, at times the gap between a tool’s theoretical guarantees and its practical performance can lead to sub-optimal behavior. This is especially true in adaptive data collection, where misspecifying the model or desired output (e.g., “return the top k performing compounds” vs. “return all compounds with a potency above a given threshold”) may bias data collection and hinder post-hoc consideration of different objectives. In this paper we highlight several such instances in real-life data collection using multi-armed bandits where such a phenomenon occurs. We believe that the objective studied in this work, that of returning all arms whose mean is quantifiably near-best, more naturally aligns with practical objectives as diverse as finding funny captions and performing medical tests. We point out that methods from adaptive data collection and multi-armed bandits can also be used on content-recommendation platforms such as social media or news aggregator sites. In these scenarios, time and again, we have seen that recommendation systems can be greedy, attempting purely to maximize clickthrough, with the long-term effect of a less informed public. Adjacent to one of the main themes of this paper, we recommend that practitioners not just focus on the objective of recommendation for immediate profit maximization but rather keep track of a more holistic set of metrics. We are excited to see our work used in practical applications and believe it can have a major impact on driving the process of scientific discovery.",143 | |
The Generalization-Stability Tradeoff In Neural Network Pruning,https://proceedings.neurips.cc/paper/2020/file/ef2ee09ea9551de88bc11fd7eeea93b0-Paper.pdf,"This work focuses on resolving an apparent contradiction in the scientific understanding of the relationship between pruning and generalization performance. As such, we believe its primary impact will be on other researchers, and it is unlikely to have substantial broader impacts. That said, understanding the mechanisms underlying our models is important for the safe deployment of such models in application domains. Our work takes a step in that direction and, we hope, may help pave the way for further understanding.",144 | |
An Efficient Adversarial Attack for Tree Ensembles,https://proceedings.neurips.cc/paper/2020/file/ba3e9b6a519cfddc560b5d53210df1bd-Paper.pdf,"To the best of our knowledge, this is the first practical attack algorithm (in terms of both computational time and solution quality) that can be used to evaluate the robustness of tree ensembles. The study of robust training algorithms for tree ensemble models has been difficult due to the lack of attack tools to evaluate their robustness, and our method can serve as a benchmark tool for robustness evaluation (similar to the FGSM, PGD, and C&W attacks for neural networks; Goodfellow et al., 2015; Madry et al., 2018; Carlini and Wagner, 2017) to stimulate research on the robustness of tree ensembles.",145 | |
Learning to Execute Programs with Instruction Pointer Attention Graph Neural Networks,https://proceedings.neurips.cc/paper/2020/file/62326dc7c4f7b849d6f013ba46489d6c-Paper.pdf,"Our work introduces a novel neural network architecture better suited for program understanding tasks related to program executions. Lessons learned from this architecture will contribute to improved machine learning for program understanding and generation. We hope the broader impact of these improvements will be better tools for software developers for the analysis and authoring of new source code. However, machine learning for static analysis produces results with uncertainty. There is a risk that these techniques will be incorporated into tools in a way that conveys greater certainty than is appropriate, which could lead to either developer errors or mistrust of the tools.",146 | |
Analytic Characterization of the Hessian in Shallow ReLU Models: A Tale of Symmetry,https://proceedings.neurips.cc/paper/2020/file/3a61ed715ee66c48bacf237fa7bb5289-Paper.pdf,"To the best of our knowledge, there are no ethical aspects or future societal consequences directly involved in our work.",147 | |
Removing Bias in Multi-modal Classifiers: Regularization by Maximizing Functional Entropies,https://proceedings.neurips.cc/paper/2020/file/20d749bc05f47d2bd3026ce457dcfd8e-Paper.pdf,"We study functional entropy based regularizers which enable classifiers to benefit more uniformly from the available dataset modalities in multi-modal tasks. We think the proposed method will help reduce the biases that present-day classifiers exploit when trained on data containing modalities, some of which are easier to leverage than others. We think this research will have positive societal implications. With machine learning being used more widely, bias from various modalities has become ubiquitous. Minority groups are disadvantaged by present-day AI algorithms, which work very well for the average person but are not suitable for other groups. We provide two examples next: 1. It is widely believed that criminal risk scores are biased against minorities¹, and mathematical methods that reduce bias in machine learning are desperately needed. In our work we show how our regularization allows us to reduce the effect of the color modality in colored MNIST, which hopefully helps reduce bias in deep nets. 2. Consider virtual assistants as another example: if pronunciation is not mainstream, the replies of AI systems are less helpful. Consequently, current AI ignores parts of society. To conclude, we think the proposed research is a first step towards machine learning becoming more inclusive. ¹ https://www.propublica.org/article/bias-in-criminal-risk-scores-is-mathematically-inevitable-researchers-say",148 | |
Generalization error in high-dimensional perceptrons: Approaching Bayes error with convex optimization,https://proceedings.neurips.cc/paper/2020/file/8f4576ad85410442a74ee3a7683757b3-Paper.pdf,"Our work is theoretical in nature, and as such the potential societal consequences are difficult to foresee. We anticipate that a deeper theoretical understanding of the functioning of machine learning systems will lead to their improvement in the long term.",149 | |
Global Convergence of Deep Networks with One Wide Layer Followed by Pyramidal Topology,https://proceedings.neurips.cc/paper/2020/file/8abfe8ac9ec214d68541fcb888c0b4c3-Paper.pdf,This work does not present any foreseeable societal consequence.,150 | |
Tackling the Objective Inconsistency Problem in Heterogeneous Federated Optimization,https://proceedings.neurips.cc/paper/2020/file/564127c03caab942e503ee6f810f54fd-Paper.pdf,"The future of machine learning lies in moving both data collection and model training to the edge. This nascent research field, called federated learning, considers a large number of resource-constrained devices, such as cellphones or IoT sensors, that collect training data from their environment. Due to limited communication capabilities as well as privacy concerns, these data cannot be directly sent to the cloud. Instead, the nodes locally perform a few iterations of training and only send the resulting model to the cloud. In this paper, we develop a federated training algorithm that is system-aware (robust and adaptable to communication and computation variabilities, by allowing heterogeneous local progress) and data-aware (able to handle skews in the size and distribution of local training data, by correcting the model aggregation scheme). This research has the potential to democratize machine learning by transcending the current centralized machine learning framework. It will enable lightweight mobile devices to cooperatively train a common machine learning model while maintaining control of their training data.",151 | |
Deep Structural Causal Models for Tractable Counterfactual Inference,https://proceedings.neurips.cc/paper/2020/file/0987b8b338d6c90bbedd8631bc499221-Paper.pdf,"Causal inference can be applied to a wide range of applications, promising to provide a deeper understanding of the observed data and prevent the fitting of spurious correlations. Our research presents a methodological contribution to the causal literature, proposing a framework that combines causal models and deep learning to facilitate modelling high-dimensional data. Because of the general applicability of deep learning and causal inference, our framework could have a broad impact by enabling fairer machine learning models that explicitly model causal mechanisms, reducing spurious correlations, and tackling statistical and societal biases. The resulting models offer better interpretability due to counterfactual explanations and could yield novel understanding through causal discovery. However, causal modelling relies on strong assumptions and cannot always unambiguously determine the true causal structure of observational data. It is therefore necessary to carefully consider and communicate the assumptions being made by the analyst. In this light, our methodology is susceptible to being used to wrongly claim the discovery of causal structures due to careless application or intentional misuse. In particular, the use of ‘black-box’ components as causal mechanisms may exacerbate concerns about identifiability, which are already present even for simple linear models. Whereas deep causal models can be useful for deriving insights from data, we must be cautious about their use in consequential decision-making, such as in informing policies or in the context of healthcare.",152 | |
Towards Better Generalization of Adaptive Gradient Methods,https://proceedings.neurips.cc/paper/2020/file/08fb104b0f2f838f3ce2d2b3741a12c2-Paper.pdf,"We believe that our work stands in line with several papers aimed at improving generalization and avoiding over-fitting. Indeed, the basic principle of our method is to fit any given model, in particular a deep model, using an intermediate differentially private mechanism that allows the model to fit fresh samples while passing over the same batch of n observations. The impact of such work is straightforward: it could avoid learning, and thus reproducing at test time, the bias present in the training dataset.",153 | |
An Analysis of SVD for Deep Rotation Estimation,https://proceedings.neurips.cc/paper/2020/file/fec3392b0dc073244d38eba1feb8e6b7-Paper.pdf,"This work considers a fundamental question of how to best represent 3D rotation matrices in neural networks. This is a core component of many 3D vision and robotics deep learning pipelines, so any broader impact will be determined by the applications or research that integrate our proposal into their systems.",154 | |
Natural Policy Gradient Primal-Dual Method for Constrained Markov Decision Processes,https://proceedings.neurips.cc/paper/2020/file/5f7695debd8cde8db5abcb9f161b49ea-Paper.pdf,"Our development adds to a growing literature on constrained Markov decision processes (CMDPs) in the broad area of safe reinforcement learning (safe RL). Beyond aiming to maximize the total reward, almost all real-world sequential decision-making applications must also account for safety in terms of cost, utility, error rate, or efficiency, e.g., autonomous driving, medical testing, financial management, and space exploration. Handling these additional safety objectives leads to constrained decision-making problems. Our research could provide an algorithmic solution for practitioners to solve such constrained problems with non-asymptotic convergence and optimality guarantees. Our methodology could offer new knowledge for RL researchers on direct policy search methods for solving infinite-horizon discounted CMDPs. Decision-making processes that build on our research could enjoy the flexibility of adding practical constraints, which would improve a large range of uses, e.g., autonomous systems, healthcare services, and financial and legal services. We may expect a broad range of societal implications, some of which we list as follows. Autonomous robots could be deployed to hazardous environments, e.g., forest fires or earthquakes, with added safety guarantees; this could accelerate rescue efforts while preserving the robots themselves. The discovery of medical treatments could become less risky by restraining side effects, so treatment biases could be minimized effectively. Policymakers in government or enterprises could encourage economic productivity as much as they can while respecting law, environmental, and public health constraints. Overall, one could expect many social welfare improvements supported by uses of our research. However, applying any theory in practice requires care about assumption/model mismatches. For example, our theory assumes well-defined feasible problems, which usually requires domain knowledge to justify. We would suggest that domain experts develop guidelines for assumption/model validation, and we encourage further work to establish generalizability to other settings. Another issue could be bias with respect to gender or race: policy parametrizations selected by biased policymakers may inherit those biases. We would also encourage research to understand and mitigate such biases.",155 | |
Optimal Algorithms for Stochastic Multi-Armed Bandits with Heavy Tailed Rewards,https://proceedings.neurips.cc/paper/2020/file/607bc9ebe4abfcd65181bfbef6252830-Paper.pdf,"Multi-armed bandits with heavy-tailed rewards cover a wide range of online learning problems, such as online classification, adaptive control, adaptive recommendation systems, and reinforcement learning. Thus, the proposed algorithm has the potential to solve such practical applications. Since the proposed method learns a given task in a short time, it may reduce economic costs or time consumption. On the other hand, if the proposed method is applied to personalized services, fast adaptation can make a person easily addicted to the service. For example, if a recommendation system adapts well to a person’s preferences, it can continuously recommend items that arouse personal interest, and that can lead to addiction. Acknowledgements This work was supported by the Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2020-0-01336, Artificial Intelligence graduate school support (UNIST)) and (No. 2019-0-01190, [SW Star Lab] Robot Learning: Efficient, Safe, and Socially-Acceptable Machine Learning).",156 | |
Robust Quantization: One Model to Rule Them All,https://proceedings.neurips.cc/paper/2020/file/3948ead63a9f2944218de038d8934305-Paper.pdf,"Deep neural networks consume tremendous amounts of energy, leaving a large carbon footprint. Quantization can improve the energy efficiency of neural networks on both commodity GPUs and specialized accelerators. Robust quantization takes another step and creates one model that can be deployed across many different inference chips, avoiding the need to re-train it before deployment (i.e., reducing the CO2 emissions associated with re-training).",157 | |
Learning Individually Inferred Communication for Multi-Agent Cooperation,https://proceedings.neurips.cc/paper/2020/file/fb2fcd534b0ff3bbed73cc51df620323-Paper.pdf,"The experimental results are encouraging in the sense that we demonstrate I2C is a promising method for targeted communication in multi-agent communication based on causal influence. It is not yet at the application stage and so does not have broader impact. However, this work learns one-to-one communication instead of one/all-to-all communication, making I2C more practical for real-world applications.",158 | |
Deep Automodulators,https://proceedings.neurips.cc/paper/2020/file/9df81829c4ebc9c427b9afe0438dce5a-Paper.pdf,"The presented line of work intends to shift the focus of generative models from random sample generation towards controlled semantic editing of existing inputs. In essence, the ultimate goal is to offer ‘knobs’ that allow content editing based on high-level features, and retrieving and combining desired characteristics based on examples. While we only consider images, the techniques can be extended to other data domains such as graphs and 3D structures. Ultimately, such research could reduce very complex design tasks into approachable ones and thus reduce dependency on experts. For instance, contrast an expert user of a photo editor or design software, carefully tuning details, with a layperson who simply finds images or designs with the desired characteristics and guides the smart editor to selectively combine them. Leveling the playing field in such tasks will empower larger numbers of people to contribute to design, engineering, and science, while also multiplying the effectiveness of the experts. The downside of such empowerment will, of course, include the threat of deepfakes and the spread of misinformation. Fortunately, public awareness of these abuses has been increasing rapidly. We attempt to convey the productive prospects of these technologies by also including image data sets with cars and bedrooms, while comparison with prior work motivates the focus on face images.",159 | |
Recurrent Quantum Neural Networks,https://proceedings.neurips.cc/paper/2020/file/0ec96be397dd6d3cf2fecb4a2d627c1c-Paper.pdf,"Without doubt, existing recurrent models—even simple RNNs—outclass the QRNN architecture proposed in this paper on real-world learning tasks. In part, this is because we cannot easily simulate a large number of qubits on classical hardware: the memory requirements necessarily grow exponentially in the size of the workspace, for instance, which limits the number of parameters we can introduce in our model—on a quantum computer this overhead would vanish, resulting in an execution time linear in the circuit depth. What should nevertheless come as a surprise is that the model does perform relatively well on non-trivial tasks such as the ones presented here, in particular given the small number of qubits (usually between 8 and 12) that we utilised. As qubit counts in real-world devices are severely limited—and likely will be for the foreseeable future—learning algorithms with tame system requirements will certainly hold an advantage. Moreover, while we motivate the topology of the presented QRNN cell given in fig. 3 by the action of its different stages (writing the input; work; writing the output), and while the resulting circuits are far more structured than existing VQE setups, our architecture is still simplistic compared to the various components of an RNN, let alone an LSTM. In all likelihood, a more specialized circuit structure (akin to going from an RNN to an LSTM) will outperform the “simple” quantum recurrent network presented herein. Beyond the exploratory aspect of our work, our main insights are twofold. On the classical side—as discussed in the introduction—we present an architecture which can run on current hardware and ML implementations such as pytorch, and which is a candidate parametrization for unitary recurrent models that hold promise in circumventing gradient degradation for very long sequence lengths. On the quantum side, we significantly advance the field of variational circuits for quantum machine learning tasks: we allow ingestion of data more than a few bits in size; we demonstrate that models with large parameter counts can indeed be evaluated and trained; and we show that classical baselines such as MNIST classification are, indeed, within reach when using a more sophisticated model. Finally, our work is the first recurrent and entirely quantum neural network presented to date. Variants of it might find application in conjunction with other quantum machine learning algorithms, such as quantum beam search [BSP19] in the context of language modelling. With a more near-term focus in mind, modelling the evolution of quantum systems with noisy dynamics is a task currently addressed using classical recurrent models [Flu+20]. Due to the intrinsic capability of a QRNN to keep track of a quantum state, it holds promise to better capture the exponentially growing phase-space dimension of the system to be modelled.",160 | |
Robustness of Bayesian Neural Networks to Gradient-Based Attacks,https://proceedings.neurips.cc/paper/2020/file/b3f61131b6eceeb2b14835fa648a48ff-Paper.pdf,"This work is a theoretical investigation of the vulnerability of Bayesian Neural Networks to gradient-based attacks in the large-data limit. The main result is that, in this limit, BNNs are not vulnerable to such attacks, as the input gradient vanishes in expectation. This advancement provides a theoretically provable rationale for selecting BNNs in applications where there is concern about attackers performing fast, gradient-based attacks. However, it does not provide any guarantee on the actual safety of BNNs trained on a finite amount of data. Our work may positively benefit the study of adversarial robustness for BNNs and the investigation of properties that make these networks less vulnerable than deterministic ones. These features could then potentially be transferred to other network paradigms and lead to greater robustness of machine learning algorithms in general. However, there may still exist different attacks leading BNNs to misclassifications, and our contribution does not provide any defence technique against them. In the last few years adversarial examples have presented a major hurdle to the adoption of AI systems in any security-related field, with applications ranging from self-driving vehicles to medical diagnoses. Machine learning algorithms show remarkable performance and generalization capabilities, but they also manifest weaknesses that are not consistent with human understanding of the world. Ultimately, the lack of knowledge about the difference between human and machine interpretation of reality leads to an issue of public trust. Procedures that are robust to changes in the output and that represent calibrated uncertainty would inherently be more trustworthy and allow for wide-spread adoption of deep learning in safety- and security-critical tasks.",161 | |
Flexible mean field variational inference using mixtures of non-overlapping exponential families,https://proceedings.neurips.cc/paper/2020/file/e3a54649aeec04cf1c13907bc6c5c8aa-Paper.pdf,"The primary contribution of this paper is theoretical and so the broader societal impact depends on how the theorems are used. The polygenic score application has the possibility to improve the overall quality of healthcare, but because the majority of GWAS are performed on individuals of European ancestries, PGSs are more accurate for individuals from those ancestry groups, potentially exacerbating health disparities between individuals of different ancestries as PGSs see clinical use [39]. The methods presented here are equally applicable to GWAS data collected from any ancestry group, however, and so efforts to diversify genetic data will ameliorate performance differences across ancestry groups. PGSs used for some traits such as sexual orientation [18], educational attainment [22], or stigmatized psychiatric disorders [14] raise thorny ethical considerations, especially when the application of such PGSs could enable genetic discrimination or fuel dangerous public misconceptions about the genetic basis of such traits [44]. On the other hand, PGSs applied to diseases have the potential to improve health outcomes and so if used responsibly could provide tremendous benefit to society.",162 | |
Position-based Scaled Gradient for Model Quantization and Pruning,https://proceedings.neurips.cc/paper/2020/file/eb1e78328c46506b46a4ac4a1e378b91-Paper.pdf,"PSG is a fundamental method that scales each gradient component differently depending on the position of the weight vector. This technique can replace the conventional gradient in any application that requires different treatment of specific locations in the parameter space. As shown in the paper, the most immediate applications are quantization and pruning, where a definite preference for specific weight forms exists. These model compression techniques are at the heart of the fast and lightweight deployment of any deep learning algorithm, and thus PSG can make a huge impact in the related industry. As another potentially related research topic, PSG has a chance to be utilized in optimization areas such as integer programming and combinatorial optimization, acting as a tool for optimizing a continuous surrogate of an objective function in a discrete space.",163 | |
Efficient Generation of Structured Objects with Constrained Adversarial Networks,https://proceedings.neurips.cc/paper/2020/file/a87c11b9100c608b7f8e98cfa316ff7b-Paper.pdf,"Broadly speaking, this work aims at improving the reliability of structures/configurations generated via machine learning approaches. This can have a strong impact on a wide range of research fields and application domains, from drug design and protein engineering to layout synthesis and urban planning. Indeed, the lack of reliability of machine-generated outcomes is one of the main obstacles to a wider adoption of machine learning technology in our societies. On the other hand, there is a risk of overestimating the reliability of the outputs of CANs, which are only guaranteed to satisfy constraints in expectation. For applications in which invalid structures must be avoided, such as safety-critical applications, the objects output by CANs should always be validated before use. From an artificial intelligence perspective, this work supports the line of thought that, in order to overcome the current limitations of AI, machine learning (and especially deep learning) technology needs to be combined with approaches from knowledge representation and automated reasoning, and that principled ways to achieve this integration should be pursued.",164 | |
Time-Reversal Symmetric ODE Network,https://proceedings.neurips.cc/paper/2020/file/db8419f41d890df802dca330e6284952-Paper.pdf,"In this paper, we introduce a neural network model that is regularized by a physics-originated inductive bias: symmetry. Our proposed model can be used to identify and predict unknown dynamics of physical systems. In what follows, we summarize the expected broader impacts of our research from two perspectives. Use for current real-world applications. Predicting dynamics plays an important role in various practical applications, e.g., robotic manipulation [16], autonomous driving [25], and other trajectory planning tasks. For these tasks, the predictive models should be highly reliable to prevent human and material losses due to accidents. Our proposed model has the potential to satisfy this high standard of reliability, considering its robustness and efficiency (see Figure 4 as an example). A first step toward a fundamental inductive bias. According to the CPT theorem in quantum field theory, CPT symmetry, which means invariance under the combined transformation of charge conjugation (C), parity transformation (P), and time reversal (T), holds exactly for all phenomena of physics [21]. Thus, CPT symmetry is a fundamental rule of nature; that means it is a fundamental inductive bias for deep learning models in the natural sciences. However, this symmetry-based bias has previously gone unnoticed. We study one of the fundamental symmetries, time-reversal symmetry in classical mechanics, as a proof of concept in this paper. We expect our findings can encourage researchers to focus on the fundamental biases of nature and to extend the research from classical to quantum mechanics, and from time-reversal symmetry to CPT symmetry. Our work would also contribute to bringing together experts in physics and deep learning in order to stimulate interaction and to begin exploring how deep learning can shed light on physics.",165 | |
Online Planning with Lookahead Policies,https://proceedings.neurips.cc/paper/2020/file/a18aa23ee676d7f5ffb34cf16df3e08c-Paper.pdf,"Online planning algorithms, such as A* and RTDP, have been extensively studied and applied in AI for well over two decades. Our work quantifies the benefits of using lookahead policies in this class of algorithms. Although lookahead policies have been widely used in online planning algorithms, their theoretical justification was lacking; our study sheds light on their benefits. Moreover, the results we provide in this paper suggest improved ways of applying lookahead policies in online planning, with benefits when dealing with various types of approximations. This work opens up room for practitioners to improve their algorithms and to base lookahead policies on solid theoretical ground. Acknowledgements. We thank the reviewers for their helpful comments and feedback. Y.E. and S.M. were partially supported by the ISF under contract 2199/20.",166 | |
Co-exposure maximization in online social networks,https://proceedings.neurips.cc/paper/2020/file/212ab20dbdf4191cbcdcf015511783f4-Paper.pdf,"Our work addresses the problem of maximizing co-exposure of information in online social networks via viral-marketing strategies. We are interested in situations where opposing campaigns are propagated in different parts of a social network, with users on one side not being aware of the content and arguments seen on the other side. Although the focus of our work is mainly theoretical, and a number of modeling considerations have been stripped out for the sake of mathematical rigor, applying these ideas in practice may have significant impact towards reducing polarization on societal issues, offering users a more balanced news diet and the possibility to participate in constructive deliberation. On the other hand, one needs to be careful about how our framework is applied in practice. One potential source of misuse is when misinformation or disinformation is offered to counter true facts. Here we assume that this aspect is orthogonal to our approach, and that the social-network platform needs to mitigate this danger by providing mechanisms for information validation, fact checking, and ethical compliance of content before allowing it to circulate in the network. Another issue is that users often do not understand why they see a particular item in their feed; the system’s content-filtering and prioritization algorithm is opaque to them. In the context of our proposal, since we are suggesting making content recommendations to selected users, it is important that transparent mechanisms are in place for users to opt into participating in such features, to understand why they receive these recommendations, and, in general, to be able to control their content.",167 | |
Variational Bayesian Monte Carlo with Noisy Likelihoods,https://proceedings.neurips.cc/paper/2020/file/5d40954183d62a82257835477ccad3d2-Paper.pdf,"We believe this work has the potential to lead to net-positive improvements in the research community and more broadly in society at large. First, this paper makes Bayesian inference accessible to non-cheap models with noisy log-likelihoods, allowing more researchers to express uncertainty about their models and model parameters of interest in a principled way; with all the advantages of proper uncertainty quantification [2]. Second, with the energy consumption of computing facilities growing incessantly every hour, it is our duty towards the environment to look for ways to reduce the carbon footprint of our algorithms [52]. In particular, traditional methods for approximate Bayesian inference can be extremely sample-inefficient. The ‘smart’ sample-efficiency of VBMC can save a considerable amount of resources when model evaluations are computationally expensive. Failures of VBMC can return largely incorrect posteriors and values of the model evidence, which if taken at face value could lead to wrong conclusions. This failure mode is not unique to VBMC, but a common problem of all approximate inference techniques (e.g., MCMC or variational inference [2, 53]). VBMC returns uncertainty on its estimate and comes with a set of diagnostic functions which can help identify issues. Still, we recommend the user to follow standard good practices for validation of results, such as posterior predictive checks, or comparing results from different runs. Finally, in terms of ethical aspects, our method – like any general, black-box inference technique – will reflect (or amplify) the explicit and implicit biases present in the models and in the data, especially with insufficient data [54]. Thus, we encourage researchers in potentially sensitive domains to explicitly think about ethical issues and consequences of the models and data they are using.",168 | |
CircleGAN: Generative Adversarial Learning across Spherical Circles,https://proceedings.neurips.cc/paper/2020/file/f14bc21be7eaeed046fed206a492e652-Paper.pdf,"This work a) addresses the problem of generative modeling and adversarial learning, which is a crucial topic in machine learning and artificial intelligence; b) proposes a technique that is generic and does not have any direct negative impact on society; and c) improves sample diversity, thus contributing to reducing biases in generated data samples.",169 | |
MetaSDF: Meta-learning Signed Distance Functions,https://proceedings.neurips.cc/paper/2020/file/731c83db8d2ff01bdc000083fd3c3740-Paper.pdf,"Emerging neural implicit representations are a powerful tool for representing signals, such as 3D shape and appearance. Generalizing across these neural implicit representations requires efficient approaches to learning distributions over functions. We have shown that gradient-based meta-learning approaches are one promising avenue to tackling this problem. As a result, the proposed approach may be part of the backbone of this emerging neural signal representation strategy. As a powerful representation of natural signals, such neural implicits may in the future be used for the generation and manipulation of signals, which may pose challenges similar to those posed by generative adversarial models today.",170 | |
Focus of Attention Improves Information Transfer in Visual Features,https://proceedings.neurips.cc/paper/2020/file/fc2dc7d20994a777cfd5e6de734fe254-Paper.pdf,"Our work is a foundational study. We believe that there are neither ethical aspects nor future societal consequences that should be discussed at the current state of our work. Unsupervised criteria paired with spatio-temporal filtering can potentially lead to the development of more robust features for describing visual information. In particular, the outcomes of this work could help in designing improved neural models capable of extracting relevant information from a continuous video stream, from the same areas that attract the human gaze.",171 | |
Differentiable Neural Architecture Search in Equivalent Space with Exploration Enhancement,https://proceedings.neurips.cc/paper/2020/file/9a96a2c73c0d477ff2a6da3bf538f4f4-Paper.pdf,"Automatic Machine Learning (AutoML) aims to build better machine learning models in a data-driven and automated manner, compensating for the shortage of machine learning experts and lowering the barrier to entry in various areas of machine learning so that non-experts can use machine learning without hassle. These days, many companies, like Google and Facebook, are using AutoML to build machine learning models for handling different businesses automatically. In particular, they leverage AutoML to automatically build deep neural networks for solving various tasks, including computer vision, natural language processing, autonomous driving, and so on. AutoML is an up-and-coming tool for taking advantage of extracted data to find solutions automatically. This paper focuses on the Neural Architecture Search (NAS) component of AutoML, and it is the first attempt to enhance the intelligent exploration of differentiable one-shot NAS in the latent space. The experimental results demonstrate the importance of introducing uncertainty into neural architecture search and point out a promising research direction for the NAS community. It is worth noting that NAS is in its infancy, and it is still very challenging to use it to completely automate a specific business function like marketing analytics, customer behavior analysis, or other customer analytics.",172 | |
Latent Bandits Revisited,https://proceedings.neurips.cc/paper/2020/file/9b7c8d13e4b2f08895fb7bcead930b46-Paper.pdf,"Our work develops improved algorithms for bandit-style exploration in a very general and abstract sense. We have demonstrated its ability to increase the rate at which interactive systems identify a user’s latent state, improving the long-term impact on user reward (e.g., engagement in a recommender system). Our work is agnostic to the form of the reward. We are strongly motivated by improving users’ positive engagement with interactive systems (e.g., by identifying user interests or preferences in a recommender system). However, other forms of reward that are unaligned with a user’s best interests could be used—our methods do not propose specific reward models. That said, our work has no social implications (welfare, fairness, privacy, etc.) beyond those already at play in the interactive system to which our methods might be applied.",173 | |
Memory Based Trajectory-conditioned Policies for Learning from Sparse Rewards,https://proceedings.neurips.cc/paper/2020/file/2df45244f09369e16ea3f9117ca45157-Paper.pdf,"DTSIL is likely to be useful in real-world RL applications, such as robotics-related tasks. Compared with previous exploration methods, DTSIL shows clear advantages when the task requires reasoning over long horizons and the feedback from the environment is sparse. We believe RL researchers and practitioners can benefit from DTSIL to solve RL application problems requiring efficient exploration. In particular, DTSIL helps avoid the cost of collecting human demonstrations and the manual engineering burden of designing complicated reward functions. Also, as discussed in Sec. 5, when deployed on more problems in the future, DTSIL has good potential to perform robustly and avoid local optima in various stochastic environments when combined with other state representation learning approaches. DTSIL in its current form is applied to robotics tasks in simulated environments, and it may contribute to real robots solving hard-exploration tasks in the future. Advanced techniques in robotics make it possible to eliminate repetitive, time-consuming, or dangerous tasks for human workers and might bring positive societal impacts. For example, advances in household robots will help reduce the cost of home care and benefit people with disabilities or older adults who need personalized care over a long period. However, they might cause negative consequences such as large-scale job disruptions at the same time; thus, proper public policy is required to reduce the social friction. On the other hand, an RL method without much reward shaping runs the risk of taking steps that are harmful to the environment. This generic issue, faced by most RL methods, also applies to DTSIL. To mitigate it, for any specific domain, one simple solution is to apply a constraint on the state space we are interested in reaching during exploration; DTSIL is complementary to mechanisms that restrict the state or action space. More principled ways to ensure safety during exploration are future work. In addition to AI safety, another common concern for most RL algorithms is memory and computational cost. In the supplementary material we discuss how to control the size of the memory for DTSIL and report the cost. Empirically, DTSIL provides ideas for solving various hard-exploration tasks at a reasonable computation cost.",174 | |
Automatically Learning Compact Quality-aware Surrogates for Optimization Problems,https://proceedings.neurips.cc/paper/2020/file/6d0c932802f6953f70eb20931645fa40-Paper.pdf,"End-to-end approaches can perform better in data-poor settings, improving access to the benefits of machine learning systems for communities that are resource-constrained. Standard two-stage approaches typically require enough data to learn well across the data distribution. In many domains focused on social impact, such as wildlife conservation, limited data can be collected and resources are also very limited. End-to-end learning is usually more favorable than the two-stage approach under these circumstances; it can achieve higher-quality results despite data limitations. This paper reduces the computational costs of end-to-end learning and increases its performance benefits. However, such performance improvements may come at a cost in transferability, because the end-to-end learning task is specialized towards particular decisions, whereas a prediction-only model from the two-stage predict-then-optimize framework might be used for different decision-making tasks in the same domain. Thus, the predictive model trained for a particular decision-making task in the end-to-end framework is not necessarily as interpretable or transferable as a model trained for prediction only. For real-world tasks, a careful cost-benefit analysis of applying an end-to-end approach vis-a-vis a two-stage approach is needed, particularly if issues of interpretability and transferability are critical; in some domains these may be crucial. Further research is required to improve upon these issues in the end-to-end learning approach.",175 | |
VIME: Extending the Success of Self- and Semi-supervised Learning to Tabular Domain,https://proceedings.neurips.cc/paper/2020/file/7d97667a3e056acab9aaf653807b4a03-Paper.pdf,"Tabular data is the most common data type in the real world. Most databases include tabular data, such as demographic information in medical and finance datasets and SNPs in genomic datasets. However, the tremendous successes of deep learning (especially in the image and language domains) have not yet been fully extended to the tabular domain; there, ensembles of decision trees still achieve state-of-the-art performance. If we can efficiently extend successful deep learning methodologies from images and language to tabular data, the application of machine learning in the real world can be greatly extended. This paper takes a step in this direction for self- and semi-supervised learning frameworks, which have recently achieved significant successes on images and language. In addition, the proposed tabular data augmentation and representation learning methodologies can be utilized in various settings, such as tabular data encoding, balancing the labels of tabular data, and missing data imputation.",176 | |
When and How to Lift the Lockdown? Global COVID-19 Scenario Analysis and Policy Assessment using Compartmental Gaussian Processes,https://proceedings.neurips.cc/paper/2020/file/79a3308b13cd31f096d8a4a34f96b66b-Paper.pdf,"This paper addresses a timely decision-making problem that faces governments and authorities around the world during these exceptional times. Decisions informed by our model may affect the daily lives of millions of people around the world during the upcoming months. We believe that now is the time for research on machine learning for clinical and public health applications to contribute to the efforts humanity exerts to handle the current crisis — we hope that our model plays a role in informing the public and governments on the consequences of policies and social behavior on public health. We are currently in the phase of communicating the projections of our model with official public health services in multiple countries, including developing countries.",177 | |
Unbalanced Sobolev Descent,https://proceedings.neurips.cc/paper/2020/file/c5f5c23be1b71adb51ea9dc8e9d444a8-Paper.pdf,"Our work provides a practical particle descent algorithm that comes with a formal convergence proof and theoretically guaranteed acceleration over previous competing algorithms. Moreover, our algorithm can naturally handle situations where the objects of the descent are particles sampled from a source distribution descending towards a target distribution of different mass. The type of applications this enables ranges from theoretically principled modeling of biological growth processes (like tumor growth) and developmental processes (like the differentiation of cells in their gene expression space) to faster numerical simulation of advection-reaction systems. Since our advance is mainly theoretical and algorithmic (besides the empirical demonstrations), its implications are necessarily tied to the uses for which it is deployed. Besides the applications we mentioned, particle descent algorithms like ours have been proposed as a paradigm to characterize and study the training dynamics of Generative Adversarial Networks (GANs). As such, they could indirectly contribute to the risks associated with nefarious uses of GANs, such as deepfakes. On the other hand, by providing tools to analyze and better understand GANs, our theoretical results might serve as the basis for mitigating their abuse.",178 | |
Neural Topographic Factor Analysis for fMRI Data,https://proceedings.neurips.cc/paper/2020/file/8c3c27ac7d298331a1bdfd0a5e8703d3-Paper.pdf,"While this paper reports on NTFA in terms of its characteristics as a general-purpose machine learning method for the analysis of neuroimaging data, we envision downstream impacts in the context of specific neuroscientific research questions. There is a need in neuroscience research to develop formal computational approaches that capture individual differences in neural function. The embedding space yields a simple, visualizable model for inspecting individual differences that has the potential to, at least in a qualitative manner, provide insights into fundamental questions in cognitive neuroscience. One such question is whether neural responses to stimuli are shared across individuals, vary by pre-defined participant groups (e.g., depressed vs. non-depressed participants), or are unique to participants or subgroups (e.g., as suggested by calls for “precision medicine” approaches). Going forward, we will use our pilot data to address whether the neural basis of fear, for example, is shared across individuals and situations (i.e., there is a single “biomarker” or “neural signature” for fear), or, as we expect, whether it varies by person or situation (suggesting that biomarkers for fear are idiographic) [Satpute and Lindquist, 2019]. With further developments, we plan to perform more extensive neuroimaging experiments that probe individual variation on additional fMRI datasets, including in-house and publicly available datasets. Our hope is that the work presented in this paper will form a basis for developing probabilistic factor-analysis models with structured priors that will allow testing and development of specific neuroscientific hypotheses regarding individual variation in the functional neural organization of psychological processes.",179 | |
On the Almost Sure Convergence of Stochastic Gradient Descent in Non-Convex Problems,https://proceedings.neurips.cc/paper/2020/file/0cb5ebb1b34ec343dfe135db691e4a85-Paper.pdf,This work does not present any foreseeable societal consequence.,180 | |
Adversarial robustness via robust low rank representations,https://proceedings.neurips.cc/paper/2020/file/837a7924b8c0aa866e41b2721f66135c-Paper.pdf,"Our work provides efficient algorithms for training neural networks with certified robustness guarantees. This can have a significant positive societal impact, considering the importance of protecting AI systems against malicious adversaries. A classifier with certified robustness guarantees can give a sense of security to the end user. On the other hand, our methods achieve robustness at the expense of a small loss in natural test accuracy compared to non-adversarial training. It is unclear how this loss in accuracy is distributed across the population. This could have a negative societal impact if the loss in accuracy falls disproportionately on data points/individuals belonging to a specific demographic group based on, say, race or gender. That said, robustness to perturbations also corresponds to a natural notion of individual fairness, since data points with similar features need to be treated similarly by a robust classifier. Hence, a careful study must be done to understand these effects before any large-scale practical deployment of systems based on our work.",181 | |
FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence,https://proceedings.neurips.cc/paper/2020/file/06964dce9addb1c5cb5d6e3d9838f733-Paper.pdf,"FixMatch helps democratize machine learning in two ways: first, its simplicity makes it available to a wider audience, and second, its accuracy with only a few labels means that it can be applied to domains where machine learning was previously not feasible. The flip side of the democratization of machine learning research is that it becomes easy for both good and bad actors to apply. We hope that this ability will be used for good—for example, obtaining medical scans is often far cheaper than paying an expert doctor to label every image. However, it is possible that more advanced techniques for semi-supervised learning will allow for more advanced surveillance: for example, the efficacy of our one-shot classification might allow for more accurate person identification from a few images. Broadly speaking, any progress on semi-supervised learning will have these same consequences.",182 | |
Improving Policy-Constrained Kidney Exchange via Pre-Screening,https://proceedings.neurips.cc/paper/2020/file/1bda4c789c38754f639a376716c5859f-Paper.pdf,"This work lives within the broader context of kidney exchange research. For clarity, we separate our broader impacts into two sections: first we discuss the impact of kidney exchange in general; then we discuss our work in particular, within the context of kidney exchange research and practice. Impacts of Kidney Exchange Patients with end-stage renal disease have only two options: receive a transplant, or undergo dialysis once every few days, for the rest of their lives. In many countries (including the US), these patients register for a deceased donor waiting list–and it can be months or years before they receive a transplant. Many of these patients have a friend or relative willing to donate a kidney, however many patients are incompatible with their corresponding donor. Kidney exchange allows patients to “swap” their incompatible donor, in order to find a higher-quality match, more quickly than a waiting list. Transplants allow patients a higher quality of life, and cost far less, than lifelong dialysis. About 10% of kidney transplants in the US are facilitated by an exchange. Finding the “most efficient” matching of kidney donors to patients is a (computationally) hard problem, which cannot be solved by hand in most cases. For this reason many fielded exchanges use algorithms to quickly find an efficient matching of patients and donors. Many researchers study kidney exchange from an algorithmic perspective, often with the goal of improving the number or quality of transplants facilitated by exchanges. Indeed, this is the purpose of our paper. Impacts of Our Work In this paper we investigate the impact of pre-screening certain potential transplants (edge) in an exchange, prior to constructing the final patient-donor matching. To our knowledge, some modern fielded exchanges pre-screen potential transplants in an ad-hoc manner; meaning they do not consider the impacts of pre-screening on the final matching. We propose methods to estimate the importance of pre-screening each edge, as measured by the change in the overall number and quality of matched transplants.7 Importantly, our methods do not require a change in matching policy; instead, they indicate to policymakers which potential transplants are important to pre-screen, and which are not. The impacts of our contributions are summarized below: Some potential transplants cannot be matched, because they cannot participate in a “legal” cyclical or chain-like swap (according to the exchange matching policy). Accordingly, there is no “value” gained by pre-screening these transplants; our methods will identify these potential transplants, and will recommend that they not be pre-screened. Pre-screening requires doctors to spend valuable time reviewing potential donors; removing these unmatchable transplants from pre-screening will allow doctors to focus only on transplants that are relevant to the current exchange pool. Some transplants are more important to pre-screen than others, and our methods help identify which are most important for the final matching. We estimate the value pre-screening of each transplant by simulating the exchange matching policy in the case that the pre-screened edge is pre-accepted, and in the case that it is pre-refused. 
To estimate the value of pre-screening each transplant, we need to know (a) the likelihood that each transplant is pre-accepted and pre-refused, and (b) the likelihood that each planned transplant fails for any reason after being matched. These likelihoods are used as input to our methods, and they can influence the estimated value of pre-screening different transplants. Importantly, it may not be desirable to calculate these likelihoods for each potential transplant (e.g., using data from the past). For example, if a patient is especially sick, we may estimate that any potential transplant involving this patient is very likely to fail prior to transplantation (e.g., because the patient is too ill to undergo an operation). In this case, our methods may estimate that all potential transplants involving this patient have very low “value”, and therefore recommend that these transplants should not be pre-screened. One way to avoid this issue is to use the same likelihood estimates for all transplants. To estimate the impact of our methods (and how they depend on the assumed likelihoods, see above), we recommend extensive modeling of different pre-screening scenarios before deploying our methods in a fielded exchange. This is important for several reasons: first, exchange programs cannot always require that doctors pre-screen potential transplants prior to matching. Since we cannot be sure which transplants will be pre-screened and which will not, simulations should be run to evaluate each possible scenario. Second, theoretical analysis shows that pre-screening transplants can—in the worst case—negatively impact the final outcome. While this worst-case outcome is possible, our computational experiments show that it is very unlikely; this can be addressed further with more experiments tailored to a particular exchange program. 7 Quality and quantity of transplants are measured by transplant weight, a numerical representation of transplant quality (e.g., see UNOS/OPTN Policy 13 regarding KPD prioritization points https://optn.transplant.hrsa.gov/media/1200/optn_policies.pdf).",183 | |
Fairness in Streaming Submodular Maximization: Algorithms and Hardness,https://proceedings.neurips.cc/paper/2020/file/9d752cb08ef466fc480fba981cfa44a1-Paper.pdf,"Several recent studies have shown that automated data-driven methods can unintentionally lead to bias and discrimination [35, 56, 5, 10, 52]. Our proposed algorithms will help guard against these issues in data summarization tasks arising in various settings – from electing a parliament and selecting individuals to influence for an outreach program, to selecting content in search engines and news feeds. As expected, fairness does come at the cost of a small loss in utility value, as observed in Section 6. It is worth noting that this “price of fairness” (i.e., the decrease in optimal objective value when fairness constraints are added) should not be interpreted as fairness leading to a less desirable outcome, but rather as a trade-off between two valuable metrics: the original application-dependent utility, and the fairness utility. Our algorithms ensure solutions achieving a close-to-optimal trade-off. Finally, despite the generality of the fairness notion we consider, it does not capture certain other notions of fairness considered in the literature (see, e.g., [18, 58]). No universal metric of fairness exists. The question of which fairness notion to employ is an active area of research, and will be application-dependent.",184 | |
Faster Wasserstein Distance Estimation with the Sinkhorn Divergence,https://proceedings.neurips.cc/paper/2020/file/17f98ddf040204eda0af36a108cbdea4-Paper.pdf,"A broader impact statement does not apply to this paper, which is of a theoretical nature.",185 | |
ContraGAN: Contrastive Learning for Conditional Image Generation,https://proceedings.neurips.cc/paper/2020/file/f490c742cd8318b8ee6dca10af2a163f-Paper.pdf,"We proposed a new conditional image generation model that can synthesize more realistic and diverse images. Our work can contribute to image-to-image translation [50, 51], generating realistic human faces [52, 53, 54], or any task that utilizes adversarial training. Since conditional GANs can extend to various image processing applications and can learn the representations of high-dimensional datasets, scientists can enhance the quality of astronomical images [55, 56], design complex architectured materials [57], and efficiently search chemical space for developing materials [58]. Conditional GANs enable many beneficial tasks, but we should be concerned that they can also be used for deepfake techniques [59]. Modern generative models can synthesize realistic images, making it more difficult to distinguish between real and fake. This can trigger sexual harassment [60], fake news [61], and even security issues in face recognition systems [62]. To avoid improper use of conditional GANs, we need to be aware of generative models’ strengths and weaknesses. In addition, it would be worthwhile to study the general characteristics of generated samples [63] and how we can distinguish fake images from unknown generative models [64, 65, 66].",186 | |
Adaptive Gradient Quantization for Data-Parallel SGD,https://proceedings.neurips.cc/paper/2020/file/20b5e1cf8694af7a3c1ba4a87f073021-Paper.pdf,"This work provides additional understanding of the statistical behaviour of deep machine learning models. We aim to train deep models using the popular SGD algorithm as fast as possible without compromising the learning outcome. As the amount of data gathered through the web and a plethora of sensors deployed everywhere (e.g., in IoT applications) is drastically increasing, the design of efficient machine learning algorithms that are capable of processing large-scale data in a reasonable time can improve everyone’s quality of life. Our compression schemes can be used in Federated Learning settings, where a deep model is trained on data distributed among multiple owners without exposing that data. Developing privacy-preserving learning algorithms is an integral part of responsible and ethical AI. However, the long-term impacts of our schemes may depend on how machine learning is used in society.",187 | |
Adversarial Attacks on Deep Graph Matching,https://proceedings.neurips.cc/paper/2020/file/ef126722e64e98d1c33933783e52eafc-Paper.pdf,"Graph data are ubiquitous in the real world, ranging from biological, communication, and transportation graphs to knowledge, social, and collaborative networks. Many real-world graphs are essentially crowdsourced projects, such as social and knowledge networks, where information and knowledge are produced by internet users who visit the sites. Thus, the quality of crowdsourced graph data is not stable, as it depends on human knowledge and expertise. In addition, it is well known that the openness of crowdsourced websites makes them vulnerable to malicious behaviors by interested parties seeking to gain some level of control of the websites and steal users’ sensitive information, or to deliberately influence public opinion by injecting misleading information and knowledge into crowdsourced graphs. Graph matching is one of the most important research topics in the graph domain; it aims to match the same entities (i.e., nodes) across two or more graphs [91, 98, 43, 46, 48, 72, 54, 105, 13, 75]. It has been widely applied to many real-world applications, ranging from protein network matching in bioinformatics [33, 63], user account linking in different social networks [62, 51, 100, 37, 101, 21, 38], and knowledge translation in multilingual knowledge bases [87, 124], to geometric keypoint matching in computer vision [22]. Owing to the openness of crowdsourced graphs, more work is needed to analyze the vulnerability of graph matching under adversarial attacks and to further develop robust solutions that are readily applicable in production systems. A potential downside of this research concerns the application of user account linking across social networks, due to user privacy issues. Recent advances in differential privacy and privacy-preserving graph analytics have shown strong performance in protecting sensitive information about individuals in datasets. Therefore, these techniques offer a great opportunity to be integrated into the vulnerability analysis of graph matching, alleviating user privacy threats.",188 | |
Stochastic Latent Actor-Critic: Deep Reinforcement Learning with a Latent Variable Model,https://proceedings.neurips.cc/paper/2020/file/08058bf500242562c0d031ff830ad094-Paper.pdf,"Despite the existence of automated robotic systems in controlled environments such as factories or labs, standard approaches to controlling such systems still require precise and expensive sensor setups to monitor the relevant details of interest in the environment, such as the joint positions of a robot or pose information of all objects in the area. Being able to learn directly from the more ubiquitous and rich modality of vision would instead greatly advance the current state of our learning systems. Not only would this ability to learn directly from images preclude expensive real-world setups, but it would also remove the need for expensive human-engineering efforts in state estimation. While it would indeed be very beneficial for our learning systems to be able to learn directly from raw image observations, this introduces the algorithmic challenges of dealing with high-dimensional as well as partially observable inputs. In this paper, we study the use of explicitly learning latent representations to assist model-free reinforcement learning directly from raw, high-dimensional images. Standard end-to-end RL methods try to solve both representation learning and task learning together, and in practice this leads to brittle solutions that are not only sensitive to hyperparameters but also slow and inefficient. These challenges help explain the predominant use of simulation in the deep RL community; we hope that with more efficient, stable, easy-to-use, and easy-to-train deep RL algorithms such as the one we propose in this work, we can help the field of deep RL transition to more widespread use in real-world setups such as robotics. From a broader perspective, there are numerous use cases and areas of application where autonomous decision-making agents can have positive effects on our society, from automating dangerous and undesirable tasks to accelerating the automation and economic efficiency of society. That being said, automated decision-making systems do introduce safety concerns, further exacerbated by their lack of explainability when they do make mistakes. Although this work does not explicitly address safety concerns, we feel that it can be used in conjunction with safety controllers to minimize negative impacts, while drawing on its powerful deep reinforcement learning roots to enable automated and robust tasks in the real world.",189 | |
How does Weight Correlation Affect the Generalisation Ability of Deep Neural Networks?,https://proceedings.neurips.cc/paper/2020/file/f48c04ffab49ff0e5d1176244fdfb65c-Paper.pdf,"Our findings sharpen the theoretical and practical aspects of generalisation, one of the most important topics in machine learning: whether a trained model can be used on unseen data. Our findings can increase our understanding of the generalisation ability of deep learning and engender a broad discussion and in-depth research on how to improve the performance of deep learning. This impact can unfold first within the machine learning community and subsequently—through the impact of machine learning—in other academic disciplines and industrial sectors. Beyond the improvements that always come with better models, we also provide a better estimation of the generalisation error, which in turn leads to improved quality guarantees. This will enlarge the envelope of applications where deep neural networks can be used; not by much, maybe, but moving the goalposts of a vast field a little has a large effect. A different kind of impact is that (neuronal) correlation is a concept well studied in neuroscience. Our results could therefore lead to follow-up research that re-visits the connection between deep neural networks and neuroscience concepts. Acknowledgement GJ is supported by a University of Liverpool PhD scholarship. SS is supported by the UK EPSRC project [EP/P020909/1], and XH is supported by the UK EPSRC projects [EP/R026173/1, EP/T026995/1]. Both XH and SS are supported by the UK Dstl project [TCMv2]. This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 956123. LijunZ is supported by the Guangdong Science and Technology Department [Grant No. 2018B010107004] and NSFC [Grant Nos. 61761136011, 61532019].",190 | |
Fully Dynamic Algorithm for Constrained Submodular Optimization,https://proceedings.neurips.cc/paper/2020/file/9715d04413f296eaf3c30c47cec3daa6-Paper.pdf,This work does not present any foreseeable societal consequence.,191 | |
Fine-Grained Dynamic Head for Object Detection,https://proceedings.neurips.cc/paper/2020/file/7f6caf1f0ba788cd7953d817724c2b6e-Paper.pdf,"Object detection is a fundamental task in the computer vision domain, which has already been applied to a wide range of practical applications. For instance, face recognition, robotics and autonomous driving heavily rely on object detection. Our method provides a new dimension for object detection by utilizing the fine-grained dynamic routing mechanism to improve performance and maintain low computational cost. Compared with hand-crafted or searched methods, ours does not need much time for manual design or machine search. Besides, the design philosophy of our fine-grained dynamic head could be further extended to many other computer vision tasks, e.g., segmentation and video analysis.",192 | |
ConvBERT: Improving BERT with Span-based Dynamic Convolution,https://proceedings.neurips.cc/paper/2020/file/96da2f590cd7246bbde0051047b0d6f7-Paper.pdf,"Positive impact The pre-training scheme has been widely deployed in the natural language processing field. It trains a large model by self-supervised learning on a large corpus first, and then quickly fine-tunes the model on downstream tasks. Such a pre-training scheme has produced a series of powerful language models, and BERT is one of the most popular. In this work, we developed a new pre-training based language understanding model, ConvBERT. It offers smaller model size, lower training cost and better performance compared with the BERT model. ConvBERT has multiple positive impacts. Contrary to the trend of further increasing model complexity for better performance, ConvBERT turns to making the model more efficient and saving training cost. It will benefit applications where computation resources are limited. In terms of methodology, it looks into model backbone design, instead of using distillation-like algorithms that still require training a large teacher model beforehand, to make the model more efficient. We encourage researchers to build NLP models based on ConvBERT for tasks we expect to be particularly beneficial, such as text-based counselling. Negative impact Compared with BERT, ConvBERT is more efficient and saves training cost; however, it could also be used to detect and understand personal text posts on social platforms, posing a privacy threat.",193 | |
Stochastic Gradient Descent in Correlated Settings: A Study on Gaussian Processes,https://proceedings.neurips.cc/paper/2020/file/1cb524b5a3f3f82be4a7d954063c07e2-Paper.pdf,"Practitioners in various areas including, but not limited to, machine learning, statistics and optimization can benefit from applying our proposed framework. Our framework does not exploit any bias in the data or any sensitive information. We do not foresee any negative outcomes regarding ethical aspects or future societal consequences.",194 | |
BRP-NAS: Prediction-based NAS using GCNs,https://proceedings.neurips.cc/paper/2020/file/768e78024aa8fdb9b8fe87be86f64745-Paper.pdf,"This research can democratize on-device deployment with a cost-efficient NAS methodology for model optimization within device latency constraints. Additionally, the carbon footprint of traditionally expensive NAS methods is vastly reduced. On the other hand, measurement and benchmarking data can be used both to create new NAS methodologies and to gain further insights about device performance. This can bring the machine learning and device research communities closer together.",195 | |
Glow-TTS: A Generative Flow for Text-to-Speech via Monotonic Alignment Search,https://proceedings.neurips.cc/paper/2020/file/5c3b99e8f92532e5ad1556e53ceea00c-Paper.pdf,"In this paper, we introduce Glow-TTS, a diverse, robust and fast text-to-speech (TTS) synthesis model. Neural TTS models, including Glow-TTS, could be applied in many applications that require naturally synthesized speech, such as AI voice assistant services, audiobook services, advertisements, automotive navigation systems and automated answering services. Therefore, by utilizing such models to synthesize natural-sounding speech, the providers of these applications could improve user satisfaction. In addition, the fast synthesis speed of the proposed model could be beneficial for service providers who offer real-time speech synthesis services. However, because of their ability to synthesize natural speech, TTS models could also be abused through cyber crimes such as fake news or phishing: they could be used to impersonate the voices of celebrities to manipulate people’s behaviour, or to imitate the voices of someone’s friends or family for fraudulent purposes. With the development of speech synthesis technology, further research on distinguishing real human voices from synthesized ones seems to be needed. Neural TTS models can sometimes synthesize undesirable speech with slurred or wrong pronunciations, so they should be used carefully in domains where even a single pronunciation mistake is critical, such as news broadcasts. An additional concern is the training data. Many corpora for speech synthesis contain speech data uttered by only a handful of speakers. Without careful consideration of, and restrictions on, the range of uses of TTS models, the voices of these speakers could be used more extensively than they might expect.",196 | |
Structured Prediction for Conditional Meta-Learning,https://proceedings.neurips.cc/paper/2020/file/1b69ebedb522700034547abc5652ffac-Paper.pdf,"Meta-learning aims to construct learning models capable of learning from experience. Its intended users are thus primarily non-experts who require automated machine learning services, a need that arises in a wide range of potential applications such as recommender systems and AutoML. The authors do not expect the work to address or introduce any societal or ethical issues.",197 | |
Constant-Expansion Suffices for Compressed Sensing with Generative Priors,https://proceedings.neurips.cc/paper/2020/file/9fa83fec3cf3810e5680ed45f7124dce-Paper.pdf,"Our main contributions are mathematical in nature. We establish the notion of pseudo-Lipschitzness, along with a concentration inequality for random pseudo-Lipschitz functions and random matrices, and we use our results to further the theoretical understanding of the non-convex optimization landscape arising in compressed sensing with deep generative priors. We foresee applications of our theorems in probability theory, learning theory, as well as inverse optimization problems involving deep neural networks. That said, compressed sensing with deep generative priors is of practical relevance as well. As shown in recent work, in the regime of a low number of measurements, compressed sensing with a deep generative prior may significantly outperform compressed sensing with a sparsity assumption. We emphasize, however, that users of a deep generative prior in compressed sensing (or other inverse problems) should be cognizant of the risk that the prior may introduce bias in the reconstruction. Indeed, the deep generative model was trained on data which might be biased, and even if the data is not biased, the training of the deep generative model might have failed for statistical or optimization reasons, resulting in a biased trained model. The reconstruction will thus only be as good as the deep generative model, since the reconstructed signal lies in the range of the deep generative model. To conclude, our contributions are methodological, but a prime application of our techniques is to improve the understanding of optimization problems arising in inverse problems involving a deep generative model. Users of deep generative priors in practical scenarios must be careful about the potential biases that their priors introduce; the provenance and quality of these priors must be understood.",198 | |
"Lamina-specific neuronal properties promote robust, stable signal propagation in feedforward networks",https://proceedings.neurips.cc/paper/2020/file/1fc214004c9481e4c8073e85323bfd4b-Paper.pdf,"Many efforts have been paid to understand the critical components of highly cognitive systems like the human brain. Studies have argued for simulations of large brain-scale neural networks as an indispensable tool ( De Garis et al., 2010). Still, they almost always fail to consider cellular diversity in the brain, whereas more and more experimental data are revealing its importance. Our computational study suggests that heterogeneity in neuronal properties is critical in information transfer within a neural circuit and it should not be ignored, especially when the neural pathway has many feedforward layers. For deep-learning research, our work also provides a new insight for neural architecture search (NAS) (Elsken et al., 2019). The search space of existing NAS methods are mainly (1) the combination of heterogeneous layers to form an entire network; (2) the combination of heterogeneous activation functions to form a cell. However, our work suggests a novel, computationally efficient strategy, that is searching for a block structure consisted of several layers (In our case, the block is composed of a integrator layer followed by a differentiator layer). On the one hand, the block should boost stable propagation of input signals into deep layers. Hence, divergence of inputs will remain detectable in the output layer at the initial phase of learning, which is suggested to accelerates the training of very deep networks (Samuel S Schoenholz and Sohl-Dickstein, 2017; Srivastava et al., 2015). On the other hand, there is also extra freedom of searching the block structure that does not suffer from vanishing/exploding backpropagation gradients (like a residual block (He et al., 2016)) Our deep FFN models are proof-of-concept and lack many other neural circuit mechanisms that can affect signal propagation in spiking neural networks, as we discuss in Section 2, although we did find that an additional component, feedforward inhibition, did not significantly change the results (Appendix Fig. A2,A3). Our study suggests that the cooperation between different types of neurons is vital for promoting signal processing in large-scale networks. It also suggests investigating the roles of heterogeneous neuronal properties in other problems such as sensory coding, short-term memory, and others, in the future studies.",199 | |