
Languages: English
Multilinguality: monolingual
Size Categories: 1K<n<10K
Language Creators: found
Annotations Creators: no-annotation
Source Datasets: original

# Dataset Card for SciTLDR

### Dataset Summary

SciTLDR: Extreme Summarization of Scientific Documents

SciTLDR is a new multi-target dataset of 5.4K TLDRs over 3.2K papers. SciTLDR contains both author-written and expert-derived TLDRs, where the latter are collected using a novel annotation protocol that produces high-quality summaries while minimizing annotation burden.

### Supported Tasks and Leaderboards

summarization

### Languages

English

## Dataset Structure

SciTLDR is split into a 60/20/20 train/dev/test split. Each split file is in JSON Lines format; every line is a JSON object structured as follows:

```json
{
  "source": [
    "sent0",
    "sent1",
    "sent2",
    ...
  ],
  "source_labels": [binary list in which 1 is the oracle sentence],
  "rouge_scores": [precomputed ROUGE-1 scores],
  "paper_id": "PAPER-ID",
  "target": [
    "author-tldr",
    "pr-tldr0",
    "pr-tldr1",
    ...
  ],
  "title": "TITLE"
}
```


The keys `rouge_scores` and `source_labels` are not required for any code to run; the precomputed ROUGE scores are included to support future research.
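
As a minimal sketch of working with this format, each line can be parsed independently; the file name `train.jsonl` below is a hypothetical path, so point it at your local copy of a split:

```python
import json

# Iterate over one SciTLDR split stored as JSON Lines, one paper per line.
with open("train.jsonl") as f:
    for line in f:
        example = json.loads(line)
        sentences = example["source"]  # one sentence per list entry
        # source_labels marks the oracle sentence with a 1
        oracle = [s for s, lbl in zip(sentences, example["source_labels"]) if lbl == 1]
        tldrs = example["target"]  # author-written and expert-derived TLDRs
        print(example["paper_id"], len(sentences), len(tldrs))
```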

### Data Instances

{ "source": [ "Mixed precision training (MPT) is becoming a practical technique to improve the speed and energy efficiency of training deep neural networks by leveraging the fast hardware support for IEEE half-precision floating point that is available in existing GPUs.", "MPT is typically used in combination with a technique called loss scaling, that works by scaling up the loss value up before the start of backpropagation in order to minimize the impact of numerical underflow on training.", "Unfortunately, existing methods make this loss scale value a hyperparameter that needs to be tuned per-model, and a single scale cannot be adapted to different layers at different training stages.", "We introduce a loss scaling-based training method called adaptive loss scaling that makes MPT easier and more practical to use, by removing the need to tune a model-specific loss scale hyperparameter.", "We achieve this by introducing layer-wise loss scale values which are automatically computed during training to deal with underflow more effectively than existing methods.", "We present experimental results on a variety of networks and tasks that show our approach can shorten the time to convergence and improve accuracy, compared with using the existing state-of-the-art MPT and single-precision floating point." ], "source_labels": [ 0, 0, 0, 1, 0, 0 ], "rouge_scores": [ 0.2399999958000001, 0.26086956082230633, 0.19999999531250012, 0.38095237636054424, 0.2051282003944774, 0.2978723360796741 ], "paper_id": "rJlnfaNYvB", "target": [ "We devise adaptive loss scaling to improve mixed precision training that surpass the state-of-the-art results.", "Proposal for an adaptive loss scaling method during backpropagation for mix precision training where scale rate is decided automatically to reduce the underflow.", "The authors propose a method to train models in FP16 precision that adopts a more elaborate way to minimize underflow in every layer simultaneously and automatically." ], "title": "Adaptive Loss Scaling for Mixed Precision Training" }

### Data Fields

- `source`: The Abstract, Introduction, and Conclusion (AIC) or the full text of the paper, with one sentence per line.
- `source_labels`: Binary labels (0 or 1), where 1 marks the oracle sentence.
- `rouge_scores`: Precomputed ROUGE baseline scores for each source sentence.
- `paper_id`: Paper identifier (an OpenReview submission ID, e.g. `rJlnfaNYvB` in the instance above).
- `target`: Multiple target TLDRs for the paper, one summary per line.
- `title`: Title of the paper.

### Data Splits

|                  | train | valid | test |
|------------------|-------|-------|------|
| SciTLDR-A        | 1992  | 618   | 619  |
| SciTLDR-AIC      | 1992  | 618   | 619  |
| SciTLDR-FullText | 1992  | 618   | 619  |
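
The splits can also be loaded with the Hugging Face `datasets` library. In the sketch below, the hub ID `allenai/scitldr` and the config names (`Abstract`, `AIC`, `FullText`) are assumptions matching the three variants in the table above:

```python
from datasets import load_dataset

# Load one of the three SciTLDR variants (config names assumed as above).
ds = load_dataset("allenai/scitldr", "AIC")
print({split: len(ds[split]) for split in ds})
# expected roughly: {'train': 1992, 'validation': 618, 'test': 619}
```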

## Dataset Creation

### Source Data

#### Who are the source language producers?

https://allenai.org/

### Annotations

#### Annotation process

Annotators were shown the title and the first 128 words of a reviewer comment about a paper, and asked to rewrite the summary (if one exists) into a single sentence or an incomplete phrase. Summaries must be no more than one sentence. Most summaries are between 15 and 25 words; the average rewritten summary is 20 words long.

## Considerations for Using the Data

### Social Impact of Dataset

The dataset was created to encourage further research in the area of extreme summarization of scientific documents.