id: string (length 7–12)
sentence1: string (length 5–1.44k)
sentence2: string (length 6–2.06k)
label: string (4 classes)
domain: string (5 classes)
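Each record below pairs a sentence1/sentence2 excerpt with a label (e.g., "contrasting") and a source domain (e.g., "NeurIPS"). As a rough illustration of how such records can be consumed, here is a minimal sketch assuming the split has been exported to a local JSONL file with one JSON object per line and the five fields above; the file name "train.jsonl" and the helper function are hypothetical placeholders, not part of the original data release.

```python
# Minimal loading sketch (assumes a local JSONL export of the rows shown below;
# the path "train.jsonl" and helper name are placeholders).
import json
from collections import Counter

def load_rows(path):
    """Yield records with fields: id, sentence1, sentence2, label, domain."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)

if __name__ == "__main__":
    rows = list(load_rows("train.jsonl"))
    # Frequency of labels and domains across the split.
    print(Counter(r["label"] for r in rows))
    print(Counter(r["domain"] for r in rows))
    # Inspect one sentence pair.
    r = rows[0]
    print(r["id"], "|", r["sentence1"][:80], "->", r["sentence2"][:80])
```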
train_700
One important remark about the k-gram conditional swap regret is that it is a random quantity that depends on the particular sequence of actions played.
a natural deterministic alternative would be of the form obtained by taking the expectation of Reg_k(T) with respect to the sequence of actions played.
contrasting
NeurIPS
train_701
scenario and adversarial on-line learning setting.
the key component of our learning guarantees is the discrepancy term.
contrasting
NeurIPS
train_702
First, the flow of information between tasks should not be dictated by a rigid diagram that reflects the relationship between the tasks themselves, such as hierarchical or temporal dependencies.
information should be exchanged across tasks whenever useful.
contrasting
NeurIPS
train_703
Finally, we simulate profile face images in various poses with pre-defined yaw angles.
the performance of the simulator decreases dramatically under large poses (e.g., large yaw angles) due to artifacts and severe texture losses, misleading the network to overfit to fake information present only in synthetic images and failing to generalize well on real data.
contrasting
NeurIPS
train_704
Lloyd's algorithm can be parallelized in the MapReduce framework (Zhao et al., 2009) or even replaced by fast stochastic optimization techniques such as online or mini-batch k-Means (Bottou & Bengio, 1994; Sculley, 2010).
the seeding step requires k inherently sequential passes through the data, making it impractical even for moderate k. This highlights the need for a fast and scalable seeding algorithm.
contrasting
NeurIPS
train_705
In cases where the expectation is intractable to compute exactly, it can be approximated by a Monte Carlo estimate (1/N) Σ_{n=1}^{N} f(v_n), where {v_n : 1 ≤ n ≤ N} is a set of samples of X.
obtaining a good estimate will require many samples if f (X D ) has high variance.
contrasting
NeurIPS
train_706
In the first paper [1], independence between blocks and delays is avoided.
they require a step size that diminishes at rate 1/k and that the sequence of iterates is bounded (which in general may not be true).
contrasting
NeurIPS
train_707
Although the frustration in these networks is relatively mild, REC-BP did not converge in any of these cases.
REC-I compensations were relatively well behaved, and produced monotonically decreasing upper bounds on the MAP value; see Figure 1 (center).
contrasting
NeurIPS
train_708
Note BPM [31]'s runtime is significantly more expensive than other methods (empirically an order higher than ours using the public source code) as it simultaneously seeks multiple paths for the best score (though accuracy is similar to ours).
our method focuses on one path regardless of whether the path-following or multiplicative strategy is used.
contrasting
NeurIPS
train_709
Performance is convincing also for stripes and dots, especially since these attributes have generic appearance, and hence must be recognized based only on geometry and layout.
colors enjoy a very distinctive, specific appearance.
contrasting
NeurIPS
train_710
The problem of finding the M MPCs has been successfully solved within the junction tree (JT) framework.
to the best of our knowledge, there has been no equivalent solution when building a junction tree is infeasible.
contrasting
NeurIPS
train_711
In particular, the popular Mahalanobis metric weights each feature (and their interactions) additively when calculating distances.
similarity can arise from a complex aggregation of comparing data instances on multiple subsets of features, to which we refer as latent components.
contrasting
NeurIPS
train_712
But to the best of our knowledge, no previous deep learning approaches on general graphs preserve the order of edges.
we propose a novel way of graph embedding that can preserve the information of edge ordering, and demonstrate its effectiveness for premise selection.
contrasting
NeurIPS
train_713
(1), it may be tempting to treat the environment s as a discrete latent variable and learn it through amortised variational inference.
we found that in the continual learning scenario this is not a viable strategy.
contrasting
NeurIPS
train_714
In essence, these methods consider explicit factorization of a coupled tensor T ∈ R^{n1×n2×n3} and a matrix, respectively, with a common rank R and shared components a_i, i = 1, . . . , R. Many variations of factorization models for coupled completion have been proposed based on CP decomposition with shared and unshared components (Acar et al., 2014), Tucker decomposition (Ermis et al., 2015), and non-negative factorization (Ermis et al., 2015).
due to factorization, these coupled completion models are non-convex, which leads to locally optimal solutions.
contrasting
NeurIPS
train_715
Feedback loops are prevalent in animal behaviour, where they are normally called a "reflex".
the reflex has the disadvantage of always being too late.
contrasting
NeurIPS
train_716
In [4], the authors propose to minimize the cut between samples and features, which is equivalent to conducting spectral clustering on the bipartite graph.
in this method, since the original graph doesn't display an explicit cluster structure, it still calls for a post-processing step such as K-means clustering to obtain the final clustering indicators, which may not be optimal.
contrasting
NeurIPS
train_717
…at the same initial belief states used in [12], but without using them to tailor the policy.
PBVI achieved expected returns of … in the … domains, with policies of … linear segments tailored to those initial belief states.
contrasting
NeurIPS
train_718
This demonstrates the main limitation of our approach to randomly sample 'seed' patches: it does not scale to arbitrarily large amounts of unlabeled data.
we do not see this as a fundamental restriction and discuss possible solutions in Section 5.
contrasting
NeurIPS
train_719
This is especially true for the move-to-front rules.
the cue orders resulting from all learning rules but the validity learning rule do not correlate or correlate negatively with the validity cue order, and even the correlations of the cue orders resulting from the validity learning rule after 100 decisions only reach an average r = .12.
contrasting
NeurIPS
train_720
This leads to a branching law with exponent of 5/2.
the presence of reflections from branch points and active conductances is likely to complicate the picture.
contrasting
NeurIPS
train_721
Note that Σ_{p∈P} f_x(p) − Σ_{q∈S} w_q f_x(q) does not equal Σ_{p∈P} cost(p, x) − Σ_{q∈S} w_q cost(q, x).
it equals the difference between Σ_{p∈P} cost(p, x) and a weighted cost of the sampled points and the centers in the approximation solution.
contrasting
NeurIPS
train_722
The notion of universal equivalence might appear quite restrictive because condition (10) must hold for any underlying probability measure P (X, Y ).
this is precisely what we need when P(X, Y) is unknown.
contrasting
NeurIPS
train_723
Valid causal inference in observational studies often requires controlling for confounders.
in practice measurements of confounders may be noisy, and can lead to biased estimates of causal effects.
contrasting
NeurIPS
train_724
In fact, alternative optimization is popular in generative adversarial networks [8], in which a generator and discriminator get alternatively updated.
alternative optimization has the following shortcomings.
contrasting
NeurIPS
train_725
Concretely, for decomposing an m×n matrix, say with m ≤ n, the best specialized implementations (typically first-order methods) have a per-iteration complexity of O(m²n), and require O(1/ε) iterations to achieve an error of ε.
the usual PCA, which carries out a rank-r approximation of the input matrix, has O(rmn) complexity per iteration, which is drastically smaller when r is much smaller than m, n. Moreover, PCA requires exponentially fewer iterations for convergence: an accuracy of ε is achieved with only O(log(1/ε)) iterations (assuming a constant gap in singular values).
contrasting
NeurIPS
train_726
When it does succeed, it produces results that are comparable with LELVM, although somewhat less accurate visually.
even then GPLVM's latent space consists of continuous chunks spread apart and offset from each other; GPLVM has no incentive to place two x's that map to the same y near each other.
contrasting
NeurIPS
train_727
The famous Newton step corresponds to a change of variables D^{1/2} = H^{1/2}, which makes the new Hessian perfectly conditioned.
a change of variables only exists when the Hessian H is positive semi-definite.
contrasting
NeurIPS
train_728
Hence, the likelihood function takes the right shape around the training samples, but not necessarily everywhere.
the code vector in an RBM is binary and noisy, and one may wonder whether this does not have the effect of surreptitiously limiting the information content of the code, thereby further minimizing the log partition function as a bonus.
contrasting
NeurIPS
train_729
In order to do this, we make regions of high density larger, and we make regions of low density smaller.
the Jacobi metric does not completely override the old notion of distance and scale; the Jacobi metric provides a compromise between physical distance and density of the probability measure.
contrasting
NeurIPS
train_730
Nonnegative Matrix Factorization (NMF) is a promising relaxation technique for clustering analysis.
conventional NMF methods that directly approximate the pairwise similarities using the least square error often yield mediocre performance for data in curved manifolds because they can capture only the immediate similarities between data samples.
contrasting
NeurIPS
train_731
[39] introduce an end-to-end imitation learning framework that learns to drive entirely from visual information, and test their approach on real-world scenarios.
their method uses behavior cloning by performing supervised learning over the state-action pairs, which is well-known to generalize poorly to more sophisticated tasks, such as changing lanes or passing vehicles.
contrasting
NeurIPS
train_732
Below, we show that the global solution of the problem is unique and diagonal.
when C A C B is degenerate, the global solutions are not unique because arbitrary rotation in the degenerate subspace is possible without changing the free energy.
contrasting
NeurIPS
train_733
When predicting with a smaller value of the parameter than the one used for learning, the results are marginally worse than when predicting with the same value.
when predicting with a larger value, the results get significantly worse, e.g., learning with 0.01 and predicting with 1 results in 10 errors, compared to only 2 when predicting with 0.01.
contrasting
NeurIPS
train_734
We do not consider features in this paper.
since most online game matching data has features associated with each game, exploring this area is left for future work.
contrasting
NeurIPS
train_735
It is possible to recursively insert minimum variance unbiased baseline terms into these expectations in order to reduce the variance on the baseline estimates.
the number of baseline parameters being estimated increases rapidly in this recursive process.
contrasting
NeurIPS
train_736
The Kendall's τ distance is a natural discrepancy measure when permutations are interpreted as rankings and is thus the most widely used in the preference learning literature.
the Hamming distance is particularly used when permutations represent matchings of bipartite graphs and is thus also very popular (see Fathony et al.).
contrasting
NeurIPS
train_737
Furthermore, the PY model's usage of thresholded Gaussian processes leads to a complex likelihood function, for which inference is a significant challenge.
ddCRP inference is carried out through a straightforward sampling algorithm, and thus may provide a simpler foundation for building rich models of visual scenes.
contrasting
NeurIPS
train_738
We do not consider these approaches in this work, in part due to the fact that the bounded-delay assumptions associated with most asynchronous schemes limit fault tolerance.
it would be interesting to further explore the differences and connections between asynchronous methods and approximation-based, synchronous methods like MOCHA in future work.
contrasting
NeurIPS
train_739
For instance, we know that the sample complexity can be much smaller than the radius of the support of X, if the average norm E[‖X‖₂] is small.
E[‖X‖₂] is also not a precise characterization of the sample complexity, for instance in low dimensions.
contrasting
NeurIPS
train_740
While Theorem 1 is positive for finding variable settings that satisfy sentences, unsatisfiable sentences remain problematic when we are unsure that there exists γ > 0 or if we have an incorrect setting of f. We are unaware of an efficient method to determine all y_{ij} for visited nodes in proofs of unsatisfiable sentences.
we expect that similar substructures will exist in satisfiable and unsatisfiable sentences resulting from the same application.
contrasting
NeurIPS
train_741
Early classification should become more effective once MI-RNN identifies instances that contain true and tight class signatures as these signatures are unique to that class.
since the two methods are trained separately, a naïve combination ends up reducing the accuracy significantly in many cases (see Figure 4).
contrasting
NeurIPS
train_742
Therefore, each iteration takes O(h²|L|) time, and the total time to obtain an optimal k-leaf-sparse solution is O(h²k|L|).
a brute-force search will take |L|^k time.
contrasting
NeurIPS
train_743
The issue, however, is that the large terms are extremely volatile and could dominate all other components in an undesired way.
TWF makes use of only gradient components of typical sizes, which slightly increases the bias but remarkably reduces the variance of the descent direction.
contrasting
NeurIPS
train_744
Recent interest in such models has surged due to their biological plausibility and accuracy for characterizing early sensory responses.
fitting poses a difficult computational challenge due to the expense of evaluating the log-likelihood and the ubiquity of local optima.
contrasting
NeurIPS
train_745
For GMM, it is still possible to use a matched linear averaging which matches the mixture components of the different local models by minimizing a symmetric KL divergence; the same idea can be used on our linear control variates method to make it applicable to GMM.
because the parameters of PPCA-based models are unidentifiable up to arbitrary orthonormal transforms, linear averaging and linear control variates can no longer be applied easily.
contrasting
NeurIPS
train_746
These works focus on a multi-task setting in which meta-learning takes place on a distribution of training tasks, to facilitate fast adaptation on an unseen test task.
our work emphasises the (arguably) more fundamental problem of meta-learning within a single task.
contrasting
NeurIPS
train_747
For the convergence of this instance of the algorithm, it is required that all the states and actions are visited infinitely many times, which makes the analysis slightly more complicated.
given a generative model, the algorithm may also be formulated in a synchronous fashion, in which we first generate a next state y ∼ P(·|x, a) for each state-action pair (x, a), and then update the action-values of all the state-action pairs using these samples.
contrasting
NeurIPS
train_748
Nevertheless, the emphasis has been on asymptotic analysis, characterizing the rates of convergence of test statistics under null hypotheses, as the number of samples tends to infinity.
we wish to study the following problem in the small sample regime: Π(C, ε): Given a family of distributions C, some ε > 0, and sample access to an unknown distribution p over a discrete support, how many samples are required to distinguish between p ∈ C versus d_TV(p, C) > ε?
contrasting
NeurIPS
train_749
Arguably, one solution is to reduce the dimensionality of such ultra high dimensional data while preserving the original data distribution.
take the ImageNet dataset as an example.
contrasting
NeurIPS
train_750
The GMM seems to support their intuition that learning separate linear subspace models for flat vs motion boundary is a good idea.
unlike the work of Fleet et al.
contrasting
NeurIPS
train_751
Since the distribution is unknown, the true error rate is not observable.
we can observe the empirical error rate.
contrasting
NeurIPS
train_752
SVGD has been applied to solve challenging inference problems in various domains; examples include Bayesian inference (Feng et al., 2017), uncertainty quantification (Zhu & Zabaras, 2018), reinforcement learning (Liu et al., 2017; Haarnoja et al., 2017), learning deep probabilistic models (Pu et al., 2017) and Bayesian meta learning (Feng et al., 2017; Kim et al., 2018).
the theoretical properties of SVGD are still largely unexplored.
contrasting
NeurIPS
train_753
Asynchronous-parallel algorithms have the potential to vastly speed up algorithms by eliminating costly synchronization.
our understanding of these algorithms is limited because the current convergence theory of asynchronous block coordinate descent algorithms is based on somewhat unrealistic assumptions.
contrasting
NeurIPS
train_754
Note that all settings fix b_j = 1 since this yields the best rate, as will be shown in Section 3.
in practice a reasonably large mini-batch size b_j might be favorable due to the acceleration that could be achieved by vectorization; see Section 4 for more discussion of this point.
contrasting
NeurIPS
train_755
The RIP treats all possible K-sparse supports equally.
if we incorporate a probabilistic model on our signal supports and consider only the signal supports with the highest likelihoods, then we can potentially do much better in terms of the number of measurements required for stable recovery.
contrasting
NeurIPS
train_756
A coarser but available alternative is to calculate the global Lipschitz constant.
prior work could provide only certifications that are orders of magnitude smaller than the usual discretization of images, even for small networks [29, 24].
contrasting
NeurIPS
train_757
Labeled points participate in the information regularization in the same way as unlabeled points.
their conditionals have additional constraints, which incorporate the label information.
contrasting
NeurIPS
train_758
However, unlike them, it provides a principled framework for full Bayesian inference and can be used to determine how to trade off goodness-of-fit across summary statistics.
to the best of our knowledge, this potential has not been realised yet, and ABC approaches are not used for linking mechanistic models of neural dynamics with experimental data (for an exception, see [17]).
contrasting
NeurIPS
train_759
Thus, the solution above is communication-efficient only when λ is relatively large.
the situation immediately improves if we can use a without-replacement version of SVRG, which can easily be simulated with randomly partitioned data: the stochastic batches can now simply be subsets of each machine's data, which are statistically identical to sampling {f_1(·), …
contrasting
NeurIPS
train_760
The surfaces and walls determine the stochastic dynamics of the world.
the agent also observes numerous other features in the environment.
contrasting
NeurIPS
train_761
Our framework, similar to BicycleGAN, can be utilized to generate multiple realistic images for a single input, while not requiring any supervision.
CycleGAN and UNIT learn one-to-one mappings as they learn only one domain-invariant latent code between the two modalities.
contrasting
NeurIPS
train_762
AQM-gen1Q-depA achieves a slight performance improvement over AQM-countQ-depA at 2-q (49.79% → 51.07%), outperforming the original deep SL method (46.8% in 5-q).
at 5-q, AQM-gen1Q performs slightly worse than AQM-countQ-depA (72.89% → 70.74%).
contrasting
NeurIPS
train_763
In fact, we are confident pseudo-counts may be used to prove similar results in non-tabular settings.
it may be difficult to provide theoretical guarantees about existing bonus-based intrinsic motivation approaches.
contrasting
NeurIPS
train_764
Although EM with power method [4] shares the same computational complexity as ours, there is no convergence guarantee for EM to the best of our knowledge.
we provide local convergence guarantee for our method.
contrasting
NeurIPS
train_765
[3], based on a multiple kernel learning framework, further demonstrated that an additional text modality can improve the accuracy of SVMs on various object recognition tasks.
all of these approaches are discriminative by nature and cannot make use of large amounts of unlabeled data or deal easily with missing input modalities.
contrasting
NeurIPS
train_766
This is because some redundant attributes dominated the selection process and the attributes selected by the compared approaches had very unbalanced discrimination capability for different classes.
the attributes selected by our method have strong and similar discrimination capability for each class.
contrasting
NeurIPS
train_767
As expected, no method wins across all experiments.
the results show that the method that wins the most (out of the 9 options) is either the combination of SeLU and VCL or that of ELU and VCL.
contrasting
NeurIPS
train_768
For small values of a, our algorithm performs worse than the baseline of directly using θ 0 , likely due to finite-sample effects.
our algorithm is far more robust as a increases, and tracks the performance of an oracle that was trained on the same distribution as the test examples.
contrasting
NeurIPS
train_769
At first sight this may appear to be restrictive.
as we show in the supplementary material, one can construct Lasso problems using a Gaussian basis W_m which lead to penalty parameter bound ratios that converge in distribution to those of the Lasso problem in Eq.
contrasting
NeurIPS
train_770
Loewenstein & Seung (2006) demonstrated that matching behavior is a steady state of learning in neural networks if the synaptic weights change proportionally to the covariance between reward and neural activities.
their proof did not take into account the change in entire synaptic distributions.
contrasting
NeurIPS
train_771
We show that the quality of the tensor sketch does not depend on sparseness, uniform entry distribution, or any other properties of the input tensor.
previous works assume specific settings such as sparse tensors [24,8,16], or tensors having entries with similar magnitude [27].
contrasting
NeurIPS
train_772
[10]) has similar goals, but is technically quite distinct: the canonical problem in preference learning is to learn a ranking on distinct elements.
the problem we consider here is to predict the outcome of a continuous optimization problem as a function of varying constraints.
contrasting
NeurIPS
train_773
However, these methods reason about the image only on the global level using a single, fixed-sized representation from the top layer of a Convolutional Neural Network as a description for the entire image.
our model explicitly reasons about objects that make up a complex scene.
contrasting
NeurIPS
train_774
Methods exist for learning such systems from data [18,19]; these methods are able to handle multivariate target variables and models that repeat in the sequence.
they are consequently more complex and computationally intensive than the much simpler changepoint detection method we use, and they have not been used in the context of skill acquisition.
contrasting
NeurIPS
train_775
Quite a few papers studied alternative models, where the actions are endowed with a richer structure.
in the large majority of such papers, the feedback structure is the same as in the standard multi-armed bandits.
contrasting
NeurIPS
train_776
The framework presented here has some similarities with the very interesting and more explicitly physiological model proposed by Buonomano and colleagues [5,18], in which time is implicitly encoded in deterministic neural networks through slow neuronal time constants.
temporal information in the network model is lost when there are stimulus-independent fluctuations in the network activity, and the network can only be used as a reliable timer when it starts from a fixed resting state, and if the stimulus is identical on every trial.
contrasting
NeurIPS
train_777
The process is asymptotically stationary if lim_{l→∞} (l) = 0.
the most important property of the discrepancy is that, as shown later in Section 4, it can be estimated from data under some additional mild assumptions.
contrasting
NeurIPS
train_778
It can be seen that the PRFs look very different to the usual center-surround structure of retinal ganglion cells.
one should keep in mind that it is really the space spanned by the PRFs that is relevant, and thus be careful when interpreting the actual filter shapes [15].
contrasting
NeurIPS
train_779
Intuitively, imagine that we grow a ball of radius r around each sample point.
the union of these balls roughly captures the hidden domain at scale r. The topological structure of the union of these balls is captured by the so-called Čech complex, which mathematically is the nerve of this union of balls.
contrasting
NeurIPS
train_780
Additive noising introduces a product-form penalty depending on both and A 00 .
the full potential of artificial feature noising only emerges with dropout, which allows the penalty terms due to and A 00 to interact in a non-trivial way through the design matrix X (except for linear regression, in which all the noising schemes we consider collapse to ridge regression).
contrasting
NeurIPS
train_781
We demonstrate that under favorable conditions, we can construct logarithmic depth trees that have leaves with low label entropy.
the objective function at the nodes is challenging to optimize computationally.
contrasting
NeurIPS
train_782
As we only wish to model the hidden harmonic state given the melody, rather than construct a full generative model of the data, Conditional Random Fields (CRFs) [14] provide a related but alternative framework.
note that training such models (e.g.
contrasting
NeurIPS
train_783
In standard GP applications, one has access to a single realisation of data y, and performs kernel learning by optimizing the marginal likelihood of the data with respect to covariance function hyperparameters θ (supplement).
with only a single realisation of data we are highly constrained in our ability to learn an expressive kernel function -requiring us to make strong assumptions, such as RBF covariances, to extract useful information from the data.
contrasting
NeurIPS
train_784
Using the above analyses, any γ that successfully disentangles e 1 and e 2 should be sufficient.
α and β can be selected by starting with α ≫ β and gradually increasing β as long as the performance of the prediction task improves.
contrasting
NeurIPS
train_785
Here, each event defines a relationship, e.g., whether in the event two entities' group(s) behave the same way or not.
in our model a relation may also have multiple attributes.
contrasting
NeurIPS
train_786
In this decomposition approach, the features specific to a topic, not a label, are regarded as important features.
the approach may result in inefficient learning as we will explain in the following example.
contrasting
NeurIPS
train_787
(2013) assumes that the buyer is fully strategic whereas we only require the buyer to be ε-strategic.
the authors assume that the distribution satisfies a Lipschitz condition, which technically allows them to bound the number of lies in the same way as in Proposition 2.
contrasting
NeurIPS
train_788
Matrix factorization (MF) collaborative filtering is an effective and widely used method in recommendation systems.
the problem of finding an optimal trade-off between exploration and exploitation (otherwise known as the bandit problem), a crucial problem in collaborative filtering from cold-start, has not been previously addressed.
contrasting
NeurIPS
train_789
[13,21] proposed object-level saliency maps to explain RL policies in visual domains by measuring the impact on action choices when we replace an object in the input image by the surrounding background.
templates for each object must be hand-engineered.
contrasting
NeurIPS
train_790
The above-mentioned work all addresses the tensor completion problem in which the locations of the missing entries are known, and moreover, observation noise is assumed to be Gaussian.
in practice, a fraction of the tensorial entries can be arbitrarily corrupted by some large errors, and the number and the locations of the corrupted entries are unknown.
contrasting
NeurIPS
train_791
A single inclusion mistake is sufficient for the OGMB learner to learn this hypothesis space.
the teacher can supply the KWIK learner with an exponential number of positive examples, because the KWIK learner cannot ever know that the target does not include all possible instances; this implies that the number of abstentions is not polynomially bounded.
contrasting
NeurIPS
train_792
To map the complex state h into a real output o_r, we use a linear combination of the real and imaginary components, similar to [1], with W_o and b_o as weights and offset. In [1], it was proven that a unitary W would prevent vanishing and exploding gradients of the cost function C with respect to h_t, since the gradient magnitude is bounded.
this proof hinges on the assumption that the derivative of f_a is also unity.
contrasting
NeurIPS
train_793
Two conditions are met in this scenario: (1) Temporal features are strong enough for classification tasks.
fine-grained spatial appearances prove to be less significant; (2) There are no complex visual structures to be modeled in the expected outputs so that spatial representations can be highly abstracted.
contrasting
NeurIPS
train_794
This was overcome by the use of Gradient Difference Loss (GDL) [14], which showed significant improvement over the past approaches when compared using similarity and sharpness measures.
this approach, although producing satisfying results for the first few predicted frames, tends to generate blurry results for predictions far away (∼6) in the future.
contrasting
NeurIPS
train_795
They demonstrate gains when parallelizing the computation across multiple machines in a cluster.
their approach requires the employed processing units to run in synchrony.
contrasting
NeurIPS
train_796
In [45], it was mentioned that any continuously differentiable value function (VF) can be approximated by increasing the number of independent basis functions to infinity in CT scenarios, and a CT policy iteration was proposed.
without resorting to the theory of reproducing kernels [3], determining the number of basis functions and selecting the suitable basis function class cannot be performed systematically in general.
contrasting
NeurIPS
train_797
A similar informal formulation appears in the work [1] that is devoted to optimizing a generalization of the ICA objective.
the actual problem considered only concerns the case of tree-structured dependence, which allows for a solution based on pairwise measurements of mutual information.
contrasting
NeurIPS
train_798
Until now the same could not be said for automatic speech recognition systems.
we have recently introduced a system which in many conditions performs this task better than humans [1][2].
contrasting
NeurIPS
train_799
Generic submodular maximization admits efficient algorithms that can attain approximate optima with global guarantees; these algorithms are typically based on local search techniques [16,35].
although polynomial-time solvable, submodular function minimization (SFM), which seeks to minimize a submodular function over all subsets, poses substantial algorithmic difficulties.
contrasting
NeurIPS