id          stringlengths   7 – 12
sentence1   stringlengths   5 – 1.44k
sentence2   stringlengths   6 – 2.06k
label       stringclasses   4 values
domain      stringclasses   5 values
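The schema above lists five columns: a string id (7–12 characters), two free-text sentences (sentence1, sentence2), a label drawn from 4 classes (e.g. contrasting), and a domain drawn from 5 classes (e.g. NeurIPS). As a minimal sketch of how rows with this schema could be inspected, the Python snippet below assumes the rows have been exported to a hypothetical JSON Lines file named contrasting_pairs.jsonl with one object per row containing those five fields; the file name and helper function are illustrative assumptions, not part of this dump.

# Minimal sketch (assumed setup): rows of the schema above exported to a
# hypothetical JSON Lines file, one {"id", "sentence1", "sentence2",
# "label", "domain"} object per line.
import json
from collections import Counter

def load_rows(path="contrasting_pairs.jsonl"):
    """Read one JSON object per non-empty line and return them as a list."""
    rows = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                rows.append(json.loads(line))
    return rows

if __name__ == "__main__":
    rows = load_rows()
    # The header says label has 4 classes and domain has 5 classes.
    print(len(rows), "rows")
    print("label classes: ", Counter(r["label"] for r in rows))
    print("domain classes:", Counter(r["domain"] for r in rows))
    # sentence1/sentence2 lengths should fall within the ranges listed above.
    s1_lengths = [len(r["sentence1"]) for r in rows]
    print("sentence1 length range:", min(s1_lengths), "-", max(s1_lengths))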
train_100
Hastie [37] further generalized this approach by assuming that class distributions are a mixture of Gaussians, which has more flexibility than LDA.
both approaches assume a common covariance matrix for all the classes, which is too strict in many practical applications, especially in high-dimensional problems where the covariance matrices of different classes tend to be different.
contrasting
NeurIPS
train_101
In such problems it is crucial to identify a nominal or baseline feature distribution with respect to which statistically significant deviations can be reliably detected.
in most applications there is seldom enough information to specify the nominal density accurately, especially in high dimensional feature spaces for which the baseline shifts over time.
contrasting
NeurIPS
train_102
Suppose one were to give every layer in the RNN the largest possible skip for any graph with a period of m = 2^(d−1): set s^(l) = 2^(d−1) in every layer, which is the regular skip RNN setting.
this apparent advantage turns out to be a disadvantage, because time spans of 2 ≤ n < m suffer from increased path lengths, which grow linearly with m; for the proposed DILATEDRNN, where d only grows logarithmically with m, the path length is much smaller than that of the regular skip RNN.
contrasting
NeurIPS
train_103
With such limited sampling, UCB1 spends almost all the time exploring and generates almost the same regret of 0.5 per turn as would an algorithm that pulls arms at random.
uCB1K/C is able to obtain a substantially lower regret by limiting the exploration to a subset of the arms.
contrasting
NeurIPS
train_104
The recent results in [7,8] suggest coresets that are similar to our definition of coresets (i.e., weighted subsets), and do preserve sparsity.
as mentioned above they minimize the 2-norm error and not the larger Frobenius error, and maybe more importantly, they provide coresets for k-SVD (i.e., k-dimensional subspaces) and not for PCA (k-dimensional affine subspaces that might not intersect the origin).
contrasting
NeurIPS
train_105
[15] presented mirror descent using the Mahalanobis norm for the proximal function, which is very similar to the proximal function that we show to cause mirror descent to be equivalent to natural gradient descent.
their proximal function is not identical to ours and they did not discuss any possible relationship between mirror descent and natural gradient descent.
contrasting
NeurIPS
train_106
Interest in monocular depth estimation dates back to the early days of computer vision, with methods that reasoned about geometry from cues such as diffuse shading [12], or contours [13,14].
the last decade has seen accelerated progress on this task [1][2][3][4][5][6][7][8][9][10], largely owing to the availability of cheap consumer depth sensors, and consequently, large amounts of depth data for training learning-based methods.
contrasting
NeurIPS
train_107
As empirically found by [12], perplexities will saturate when n becomes large, because only a small portion of words actually exhibit long-range dependencies.
we can see that the VPYLM performance is comparable to that of HPYLM with much fewer nodes and restaurants up to n = 7 and 8, where vanilla HPYLM encounters memory overflow caused by a rapid increase in the number of parameters.
contrasting
NeurIPS
train_108
Mean field variational Bayes (MFVB) is a popular posterior approximation method due to its fast runtime on large-scale data sets.
a well known major failing of MFVB is that it underestimates the uncertainty of model variables (sometimes severely) and provides no information about model variable covariance.
contrasting
NeurIPS
train_109
An exact implementation of Bod's parsing method is still infeasible, but Goodman gives an approximation that can be implemented efficiently.
the method still suffers from the lack of justification of the parameter estimation techniques.
contrasting
NeurIPS
train_110
The middle panel suggests that the ℓ2-path norm and spectral norm can provide some explanation for this phenomenon.
as we discussed in Section 2, the actual complexity measure based on the ℓ2-path norm and spectral norm also depends on the number of hidden units, and taking this into account indicates that these measures cannot explain this phenomenon.
contrasting
NeurIPS
train_111
We used 174,577 songs and 14,198 albums to make up the meta-training matrix ¢ , which is dimension 174,577x174,577.
note that the ¢ meta-training matrix is very sparse, since most songs only belong to 1 or 2 albums.
contrasting
NeurIPS
train_112
A sufficient condition to satisfy this definition is Hence, the set of misclustered nodes is defined as [6] M_n = i ∈ {1, . . . , n} : In practice, the k-means algorithm tries to find a local minimum, and hence, one should run this step with multiple initializations to achieve a global minimum.
empirically we found that good performance is achieved even if we use a single run of k-means.
contrasting
NeurIPS
train_113
This is related to recent work on Hough forests for object detection and localization [14], where leaves collect information on locations and sizes of bounding boxes of objects in training images.
they use this evidence to predict a spatial distribution of bounding boxes in a test image, whereas we use the evidence stored in tree leaves to predict the distribution ratios.
contrasting
NeurIPS
train_114
From a practical perspective, working in HS(ρ) is not computationally feasible.
our approximation to C has a representation in the finite dimensional space F, as defined here.
contrasting
NeurIPS
train_115
Also, SAGA can be analyzed without any additional synchronization per epoch.
there is no qualitative difference in these guarantees accumulated over the epoch.
contrasting
NeurIPS
train_116
If the clustering is incorrect the algorithm gets some feedback from the teacher.
the feedback in this case is different from the one in the EQ model.
contrasting
NeurIPS
train_117
For example, the bulk of the differential privacy literature has focused on the central model, in which user data is collected by a trusted aggregator who performs and publishes the results of a differentially private computation [11].
google, Apple, and Microsoft have instead chosen to operate in the local model [15,6,2,10], where users individually randomize their data on their own devices and send it to a potentially untrusted aggregator for analysis [18].
contrasting
NeurIPS
train_118
For instance, Catoni (2016) constructs a robust estimator of the Gram matrix of a random vector Z ∈ R^d (as well as its covariance matrix) via estimating the quadratic form E⟨Z, u⟩^2 uniformly over all ‖u‖_2 = 1.
the bounds are obtained under conditions more stringent than those required by our framework, and resulting estimators are difficult to evaluate in applications even for data of moderate dimension.
contrasting
NeurIPS
train_119
This may superficially seem much too weak.
this condition turns out to be equivalent to boostability.
contrasting
NeurIPS
train_120
• Averaging between the parameter vectors of k computers reduces variance by O(k^(−1/2)), similar to the result of [7].
it does not reduce bias (this is where [7] falls short).
contrasting
NeurIPS
train_121
During the training phase Gaussian noise can take negative values, so the input to the following layer can be of arbitrary sign.
during the testing phase noise θ is equal to 1, so the input to the following layer is non-negative with many popular non-linearities (e.g.
contrasting
NeurIPS
train_122
Therefore, in terms of the considered replica symmetric ansatz, a complete solution of the problem seems to be easily obtainable; unfortunately, it is not.
this set of equations (15) may be solved numerically for general β, K, and C. There exists an analytical solution of these equations.
contrasting
NeurIPS
train_123
These methods scale to large datasets by using noisy gradients calculated using a mini-batch or subset of the dataset.
the high variance inherent in these noisy gradients degrades performance and leads to slower mixing.
contrasting
NeurIPS
train_124
This symmetry is generated by a reflection operator, say the function m : R^3 → R^3 that flips the first axis: If S is a shape of a bilaterally-symmetric object, no matter how we align S to the symmetry plane, in general m[S] ≠ S due to object deformations.
we can expect m[S] to still be a valid shape for the object.
contrasting
NeurIPS
train_125
While a uni-modal distribution with high variance can also produce both low and high values for the probability β_gk, it will also produce intermediate values.
draws from the bi-modal distribution will have a clear gap between low and high values.
contrasting
NeurIPS
train_126
In the CIS (Random) model, the first few and the last few levels had little effect on perplexity and the medium-depth levels accounted for most of perplexity reduction.
in the CIS (LearnedRI) model, the effect of a level on perplexity decreased with level depth, with the first few levels reducing perplexity the most, which is a consequence of the greedy nature of the tree-learning algorithm.
contrasting
NeurIPS
train_127
at least 1 − δ, we have Such uniform convergence results are fairly common for decomposable loss functions such as the squared loss, logistic loss etc.
the same is not true for non-decomposable loss functions barring a few exceptions [17,10].
contrasting
NeurIPS
train_128
In Figure 5, models trained with block_size = 1 and block_size = 7 are both robust with block_size = 1 applied during inference.
the performance of the model trained with block_size = 1 decreased more quickly with decreasing keep_prob when applying block_size = 7 during inference.
contrasting
NeurIPS
train_129
More recently, [12] proposes an approximate objective for structural SVMs that leads to an algorithm considerably faster than DLPW on problems requiring expensive inference.
the contribution of this work is twofold.
contrasting
NeurIPS
train_130
Therefore, the terminal state ¤ @ is an absorbing state of the finite Markov chain.
the above analysis shows that ¤ @ essentially is composed of multiple absorbing states.
contrasting
NeurIPS
train_131
2, are applicable only for special-structured small networks.
common networks used in evaluations of defense methods are wide, which makes prior methods computationally intractable, and complicated, which makes some prior methods inapplicable.
contrasting
NeurIPS
train_132
Oddly, in our analysis of the strongly convex case, the accelerated method is less sensitive to errors than the basic method.
unlike the basic method, the accelerated method requires knowing µ in addition to L. If µ is misspecified, then the convergence rate of the accelerated method may be slower than the basic method.
contrasting
NeurIPS
train_133
by removing whole neurons, filters or layers.
non-adaptive regularization techniques require tuning of a huge number of hyperparameters, which makes them difficult to apply in practice.
contrasting
NeurIPS
train_134
Finally, note that the above analysis fixes α = O(1/T ), β = 1 − α, but in practice FastRNN learns α, β (which is similar to performing cross-validation on α, β).
interestingly, across datasets the learnt α, β values indeed display a similar scaling wrt T for large T (see Figure 2).
contrasting
NeurIPS
train_135
The first source of difficulty is adjusting the notion of effective rank (which the algorithm needs to compute) to compensate for the uncertainty in the knowledge of the eigenvalues of G. A further problematic issue arises because we want to measure the smoothness of f_0 along the eigendirections of G, and so we need to control the convergence of the eigenvectors, given that Ĝ converges to G in spectral norm.
when two eigenvalues of G are close, then the corresponding eigenvectors in the estimated matrix Ĝ are strongly affected by the stochastic perturbation (a phenomenon known as hybridization or spectral leaking in matrix perturbation theory, see [1, Section 2]).
contrasting
NeurIPS
train_136
As shown in Figure 2(a)-(f), elements of M with large values mainly distribute on the upper-left borders if the type of the blur kernel is Gaussian.
elements of M with large values mainly distribute on the diagonal if the input is a motion kernel as shown in Figure 2(g)-(l).
contrasting
NeurIPS
train_137
Several papers used multi-agent reinforcement learning [17,18,19] and planning [20,21,22,23] to generate cooperation in this setting.
this approach has not yet demonstrated robust cooperation in games with more than two players, which is often observed in human behavioral experiments.
contrasting
NeurIPS
train_138
We say a communication protocol is non-interactive if a message broadcasted by one party does not depend on the messages broadcasted by other parties.
interactive protocols allow the messages at any stage of the communication to depend on all the previous messages.
contrasting
NeurIPS
train_139
This makes the tree traversal unpredictable leading to trivial worst-case runtime guarantees.
locality-sensitive hashing [10] based methods approach search in a different way.
contrasting
NeurIPS
train_140
We have focussed on product and sigmoidal units as nonlinear computing elements.
the construction presented here is generic.
contrasting
NeurIPS
train_141
They therefore have typically wider applicability since they are not tied to any particular classifier family.
wrappers make the classifier an integral part of their operation, repeatedly invoking it to evaluate each of a sequence of feature subsets, and selecting the subset that results in minimum estimated classification error (for that particular classifier).
contrasting
NeurIPS
train_142
First order methods are particularly attractive in such problems as they typically enjoy computational complexity linear in the input size.
the convergence of these methods crucially depends on the geometry of the data; for instance, running the same algorithm on a rotated set of examples can return vastly inferior results.
contrasting
NeurIPS
train_143
It is known that stress can affect short- and long-term memory by modulating plasticity through stress hormones and neuromodulators [1,2,3,6].
there is no integrative model that would accurately predict and explain differential effects of acute stress.
contrasting
NeurIPS
train_144
In such a case, one could resort to within-image methods to further reduce the entropy.
there is a risk that such methods will remove components that actually represent smooth gradations in the anatomy.
contrasting
NeurIPS
train_145
These include one-to-one, many-to-one and one-to-many mappings.
existing environments (e.g., OpenAI Gym [6] and Universe [33]) wrap one game in one Python interface, which makes it cumbersome to change topologies.
contrasting
NeurIPS
train_146
In Section 4 we will establish the theoretical results on the COCA estimator and will show that it can estimate the latent true dominant eigenvector θ_1 at a fast rate and can achieve feature selection consistency.
we provide another model inspired from the classical PCA method, where we wish to estimate the leading eigenvector of the latent covariance matrix.
contrasting
NeurIPS
train_147
Since both q*(u) and ∂L/∂θ|_{z_{1:M}} depend linearly on q(x) via sufficient statistics that contain a summation over all elements in the state trajectory, we can obtain unbiased estimates of these sufficient statistics by using one or multiple segments of the sequence that are sampled uniformly at random.
obtaining q(x) also requires a time complexity of O(T ).
contrasting
NeurIPS
train_148
The negative KL divergence term in Equation (5) tries to close the gap between the two pipelines, and one could consider allocating more weight to the negative KL term of the objective function to mitigate the discrepancy in the encoding of latent variables at training and testing, i.e., −(1 + β) KL(q_φ(z|x, y) ‖ p_θ(z|x)) with β ≥ 0.
we found this approach ineffective in our experiments.
contrasting
NeurIPS
train_149
[19,20] -and other related work [6] -assume that each agent votes for a single alternative.
it is potentially possible to design agents that generate a ranking of multiple alternatives, calling for a principled way to harness this additional information.
contrasting
NeurIPS
train_150
In these experiments we let both Pegasos and FOBOS employ a projection after each gradient step into a 2-norm ball containing w ⋆ (see [14]).
in the experiment corresponding to the rightmost plot of Fig.
contrasting
NeurIPS
train_151
However, none of the existing PID information components described above fits yet the notion of intersection information, as none of them quantifies the part of sensory information I(S : R) carried by neural activity R that also informs the choice C. The PID quantity that seems to be closest to this notion is the redundant information that S and R share about C, SI(C : {S; R}).
previous works pointed out the subtle possibility that even two statistically independent variables (here, S and R) can share information about a third variable (here, C) [23,27].
contrasting
NeurIPS
train_152
Ideally, we would want an algorithm that could learn each sub-task and combine them into the complete task, rather than only be able to learn single monolithic tasks.
for many classes of quantitative rewards, "combining" rewards remains an ad-hoc procedure.
contrasting
NeurIPS
train_153
We further utilize the dependence between the algorithm input and output and the stochasticity of the algorithm, and we give results for more general processes.
we only obtain upper bounds in this paper.
contrasting
NeurIPS
train_154
Specifically, a DBN typically refers to a Bayes' net in which the variables have an explicit notion of time, and past observations have some influence on estimates about the present and future.
marginalizing over unobserved variables at time t−1 typically produces increased complexity in the model of variables at time t. In both [6] and this work, the emphasis is on performing inference with current information only, and efficiency is obtained by leveraging the similarity between the previous and newly updated models.
contrasting
NeurIPS
train_155
We can see that the Bernstein upper bound is much tighter than the TRW upper bound, although at the cost of turning a deterministic bound into a (1 − δ) probabilistic bound.
the Bernstein interval fails to report a meaningful lower bound when the model is difficult (σ_p ≈ ±0.5), because n = 10^4 is small relative to the difficulty of the model.
contrasting
NeurIPS
train_156
First, the reconstruction error alone is a valid criterion only if one really plans to perform dimensionality reduction of the data and stop there.
pCA is often used merely as a preprocessing step and the projected data is then submitted to further processing (which could be classification, regression or something else).
contrasting
NeurIPS
train_157
For instance, for a user uploading a new song, tagging it as 'Rock' may be informative, but will probably only contribute marginally to the song's traffic, as the competition for popularity under this tag can be fierce.
choosing a unique or obscure tag may be appealing, but will not help much either.
contrasting
NeurIPS
train_158
Due to the implicit nature of the nonlinear mapping, we cannot directly evaluate w_ij.
we only need its dot product with the transformed input vectors Φ(x).
contrasting
NeurIPS
train_159
We assume that each node i can join or leave a given group k according to a Markov model.
since each node can join multiple groups independently, we naturally consider factorial hidden Markov models (FHMM) [8], where latent group membership of each node independently evolves over time.
contrasting
NeurIPS
train_160
SPTM has shown impressive results on image-based navigation.
causal InfoGAN's parametric approach of learning a compact model for planning has the potential to scale up to more complex problems, in which the increasing amount of data required would make the nonparametric SPTM approach difficult to apply.
contrasting
NeurIPS
train_161
The trend suggests that, for very large J, close to K measurements per signal should suffice.
with independent CS reconstruction, for perfect reconstruction of all signals the number of measurements per sensor increases as a function of J.
contrasting
NeurIPS
train_162
In contrast, using the SAG iterations from the beginning gives the same rate but with a constant proportional to n. Note that this bound is obtained when initializing all y i to zero after the SG phase.
in our experiments we do not use the SG initialization but rather use a minor variant of SAG (discussed in the next section), which appears more difficult to analyze but which gives better performance.
contrasting
NeurIPS
train_163
When the cardinality of X_S increases under infill asymptotics [14, §3.3], This is the limit for the posterior variance at any test location for task T, if one has training data only for the secondary task S. This is because a correlation of ρ between the tasks prevents any training location for task S from having correlation higher than ρ with a test location for task T. Suppose correlations in the input-space are given by an isotropic covariance function k_x(|x − x'|).
if we translate correlations into distances between data locations, then any training location from task S is beyond a certain radius from any test location for task T. A training location from task T may lie arbitrarily close to a test location for task T, subject to the constraints of noise.
contrasting
NeurIPS
train_164
This low-rank structure carries through for purely linear statistics (such as sample means).
non-linearities in the test statistic calculation, e.g., normalizing by pooled variances, will contribute a long tail of eigenvalues, and so we require that this long tail will either decay rapidly, or that it does not overlap with the dominant eigenvalues.
contrasting
NeurIPS
train_165
A straightforward and popular approach to optimize L is to use stochastic gradient methods [24,26,28,35].
natural-gradients are preferable when optimizing the parameters of a distribution [3,15,18].
contrasting
NeurIPS
train_166
This is because these methods solve the dual optimization problem for a given performance measure; hence the intermediate models do not necessarily yield good accuracies.
(stochastic) gradient based methods directly offer progress in terms of the primal optimization problem, and hence provide good intermediate solutions as well.
contrasting
NeurIPS
train_167
Hence, R(s, t) global should be the most important part in this decomposition.
in case of the standard resistance distance the contribution of the global part becomes negligible as n → ∞.
contrasting
NeurIPS
train_168
We denote the weights of this linear combination as W ∈ R^(N_VOX × N_CELLS), and u_t as a vector of size N_VOX representing predicted neural activity at each voxel at time t. Thus, for training the model, the predicted neural activity is compared with the actual activity recorded by the scanner.
neural activity is not instantly reflected in the intensity recorded by the scanner, but is delayed according to the haemodynamic response function (HRF; Figure S3).
contrasting
NeurIPS
train_169
This algorithm extracts event-related desynchronization (ERD) effects, i.e., event-related attenuations in some frequency bands, e.g., the µ/β-rhythm.
the CSP algorithm can be used more generally, e.g., in [11] a suitable modification to movement-related potentials was presented.
contrasting
NeurIPS
train_170
It seems the learning algorithm is suffering from the saddle point problem [8].
the hint may provide an effective guidance to avoid the problem by directly having a guidance at an intermediate layer.
contrasting
NeurIPS
train_171
As shown in [28] these frameworks have connections to learning policies in reinforcement learning.
the policies are learned over incomplete configurations.
contrasting
NeurIPS
train_172
It is conceivable that one may construct a probabilistic query strategy analogous to the Replicated Bisection strategy by replicating queries in L pre-determined sub-intervals.
it appears challenging to prove that such replications preserve privacy, and still more difficult to see how one may obtain a matching query complexity lower bound in the noisy setting.
contrasting
NeurIPS
train_173
Since all histories are of interest, bridging tests are single observations, and T E is exactly equivalent to the original system.
note that in order to make the predictions of interest, one must only know whether the ball is neighboring or on the pixel.
contrasting
NeurIPS
train_174
Knowledge of a protein's unique conformation provides insight into the mechanisms by which a protein acts.
no algorithm exists that accurately maps sequence to structure, and one is forced to use "wet" laboratory methods to elucidate the structure of proteins.
contrasting
NeurIPS
train_175
The synthetic data sets allow for a controlled evaluation, and for generating training and testing data sets of any desired size.
the data is generated from a distribution that indeed has only a single hidden variable.
contrasting
NeurIPS
train_176
This rate can be interpreted as the rate at which unlabeled examples estimate the parameters of the best fitting model and the rate at which labeled examples correctly label these estimated decision regions.
for small u, estimation of the decision regions will be bad and the corresponding l*_u > l*.
contrasting
NeurIPS
train_177
Moreover, among those few exceptions that do not use projections onto X i when Π Xi is not easy to compute, only [15,16] can handle agent-specific constraints without assuming global knowledge of the constraints by all agents.
no rate results in terms of suboptimality, local infeasibility, and consensus violation exist for the primal-dual distributed methods in [15,16] when implemented for the agent-specific conic constraint sets X_i = {x : A_i x − b_i ∈ K_i} studied in this paper.
contrasting
NeurIPS
train_178
As an interesting feature, LWR can regress on non-stationary functions, a beneficial property, for instance, in control problems.
it does not provide a proper generative model for function values, and existing algorithms have a variety of manual tuning parameters that strongly influence bias, variance and learning speed of the results.
contrasting
NeurIPS
train_179
If this is the case then PickyAdaBoost abstains in that round and does not include h_t into the combined hypothesis it is constructing.
(Note that consequently the distribution for the next round of boosting will also be D.) if the current base classifier has advantage γ where |γ| ≥ γ, then PickyAdaBoost proceeds to use the weak hypothesis just like AdaBoost, i.e.
contrasting
NeurIPS
train_180
Use of kernels together with high-order features may lead to further improvements.
we note that the advantage of the higher order features may become less substantial as the observations become more powerful in distinguishing the classes.
contrasting
NeurIPS
train_181
Recent work relied on an additional maximum-likelihood estimation (MLE) stage merged with a spectral method to attain good estimates in ℓ_∞ error to achieve the limit for the pairwise model.
although it is valid in slightly restricted regimes, our result demonstrates a spectral method alone to be sufficient for the general M -wise model.
contrasting
NeurIPS
train_182
The cost function in equation ( 5) is the square of the Frobenius norm of the difference between the empirical matrix ¢ and the fit kernel ¡ c . The use of the Frobenius norm is similar to the Ordinary Least Squares technique of fitting variogram parameters in geostatistics [7].
instead of summing variogram estimates within spatial bins, we form covariance estimates over all meta-training data pairs ¥ .
contrasting
NeurIPS
train_183
For example, in the case (d = 3, k = 1) the free parameters are Z_12 and Z_13, which define a coordinate system for the sphere.
as a function of U, the integrand is simply 1. The density is maximized when U contains the top k eigenvectors of S. The density is unchanged if we negate any column of U.
contrasting
NeurIPS
train_184
As a general remark, it appears that there is no globally optimal α parameter across datasets.
the reported training and test MNLL curves appear to be in agreement regarding the optimal choice for α .
contrasting
NeurIPS
train_185
GAN can be trained efficiently via back-propagation through the nonlinear function of the generator, which typically requires the data to be continuous (e.g., images).
the discrete nature of text renders the model non-differentiable, hindering use of GAN in natural language processing tasks.
contrasting
NeurIPS
train_186
Quantitatively, our algorithm achieves a higher IoU over these methods (MarrNet 0.39 vs. DRC 0.34).
we find the IoU metric sub-optimal for three reasons.
contrasting
NeurIPS
train_187
For real datasets, such as image histograms, where minwise sampling is popular [13], the value of this sparsity is of the order of 0.02-0.08 (see Section 4.2), leading to 1/s_x ≈ 13-50.
the number of non-zeros is around half million.
contrasting
NeurIPS
train_188
When, from the observations, a red node w ∈ V^(r) is connected to at most a single green node, i.e., if ∑_{v∈V^(g)} A_vw ≤ 1, this red node is useless in the classification of green nodes.
when a red node is connected to two green nodes, say v_1 and v_2 (A_{v_1 w} = 1 = A_{v_2 w}), we may infer that the green nodes v_1 and v_2 are likely to be in the same cluster.
contrasting
NeurIPS
train_189
When ‖·‖_R and ‖·‖_C are both Euclidean norms, this oracle can be efficiently computed via the leading left and right singular vector pair.
for most other interesting cases like low rank tensors, such an oracle is intractable [29].
contrasting
NeurIPS
train_190
Control was historically among the earliest applications of neural networks, but the recent surge in performance has been in computer vision, speech recognition and other classification problems that arise in artificial intelligence and machine learning, where large datasets are available.
the data needed to learn neural network controllers is much harder to obtain, and in the case of imaginary characters and novel robots we have to synthesize the training data ourselves (via trajectory optimization).
contrasting
NeurIPS
train_191
The spike counts are conditionally Poisson distributed given a vector of parameters w and a time-dependent vector of covariates x_t. The log-likelihood of w given the vector of all observed spike counts y is where we have dropped terms constant in w and the t-th row of the design matrix X is x_t. The methods in this paper apply both when using the canonical log link function such that the nonlinearity is f(x) = exp(x) and when using alternative nonlinearities.
with moderate amounts of data, first or second order optimization techniques can be used to quickly find point estimates of the parameters w. Inference can be prohibitive for large datasets, as each evaluation of the log-likelihood requires passing through the entire design matrix.
contrasting
NeurIPS
train_192
In RVM, the "dictionary" used for signal representation is the collection of values from the "kernel function".
sRSC roots in the standard sparse representation and recent developments of harmonic analysis, such as curvelet, bandlet, contourlet transforms that show excellent properties in signal modelling.
contrasting
NeurIPS
train_193
For example, any F . With this scenario in mind, it is unsurprising that low-rank approximation guarantees fail as an accuracy measure in practice.
we ran a standard sketch-and-solve approximate SVD algorithm (see Section 3) on SNAP/AMAZON0302, an Amazon product co-purchasing dataset [22,23], and achieved very good low-rank approximation error in both norms for k = 30: the approximate principal components given by Z are of significantly lower quality than A's true singular vectors (see Figure 1).
contrasting
NeurIPS
train_194
Note that this is different from Figure 2(b), as now we take the effect of random sampling and SVD into account.
the trend in both figures is the same: SMP-PCA always outperforms SVD(Ã^T B̃) and can be arbitrarily better as θ goes to zero.
contrasting
NeurIPS
train_195
However, their algorithms are prone to local optimal issues and the recovered tensor might be very different from its true value.
our main results, Theorem 1 and Theorem 2, guarantee that a convex program can exactly or accurately recover the pairwise interaction tensors from O(nr log^2(n)) observations.
contrasting
NeurIPS
train_196
The assumption refers to the fact that the value of a slate is a linear function of its feature representation.
note that this linear dependence is allowed to be completely different across contexts, because we make no assumptions on how φ_x depends on x, and in fact our method does not even attempt to accurately estimate φ_x.
contrasting
NeurIPS
train_197
Unfortunately, β is typically unknown to the learner.
using the tools to design self-tuning algorithms, e.g.
contrasting
NeurIPS
train_198
Table 1 shows that for the first three sets of data (Gaussians, Circles, AI) maximum margin and spectral clustering obtained identical small error rates, which were in turn significantly smaller than those obtained by k-means.
maximum margin clustering demonstrates a substantial advantage on the fourth data set (Joined Circles) over both spectral and k-means clustering.
contrasting
NeurIPS
train_199
Because, according to Algorithm 1 in [11], the output of input Θ weighted by the scalars s = Φ g can be approximated in this way, at first glance the scalars s are totally erased by BN in this mathematical process.
the de facto operation of a convolutional module has a processing order to aggregate the features.
contrasting
NeurIPS