id         stringlengths    7–12
sentence1  stringlengths    5–1.44k
sentence2  stringlengths    6–2.06k
label      stringclasses    4 values
domain     stringclasses    5 values
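The rows below follow this five-column schema (id, sentence1, sentence2, label, domain). As a minimal sketch of how such records could be consumed, assuming the split has been exported to a JSON Lines file; the file name contrasting_neurips.jsonl and the use of the plain json module (rather than any particular dataset library) are illustrative assumptions, not part of the source:

    import json

    # Minimal sketch: stream records that follow the id / sentence1 / sentence2 /
    # label / domain schema shown above. The file name is an assumption.
    def iter_rows(path="contrasting_neurips.jsonl"):
        with open(path, encoding="utf-8") as f:
            for line in f:
                yield json.loads(line)

    # Example: collect the sentence pairs labeled "contrasting" from the NeurIPS domain.
    pairs = [
        (row["sentence1"], row["sentence2"])
        for row in iter_rows()
        if row["label"] == "contrasting" and row["domain"] == "NeurIPS"
    ]
    print(len(pairs))

In the rows listed here, sentence2 consistently begins in lowercase; that casing is preserved as-is below.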
train_0
In Bradley & Murray (2011) an iterative stochastic procedure is proposed for finding such a matrix.
we did not find it to work very well in an online optimization setting, and therefore stick to the original equilibration matrix D_E. Although the original motivation for row equilibration is to prevent round-off errors, our interest is in how well it is able to reduce the condition number.
contrasting
NeurIPS
train_1
In the most general case, the encoder can update A t and B t in an unconstrained fashion at each timestep t. From the decoder side, we do not know C and therefore we cannot know F C, an estimate of which is needed by the user to update the encoder.
the decoder sets F and can predict updates to [CA CB] directly, instead of to [A B] as the actual encoder does (equation 15).
contrasting
NeurIPS
train_2
Note that the proposed RankGAN has a Nash Equilibrium when the generator G_θ simulates the human-written sentences distribution P_h, and the ranker R_φ cannot correctly estimate rank between the synthetic sentences and the human-written sentences.
as also discussed in the literature [8,9], it is still an open problem how a non-Bernoulli GAN converges to such an equilibrium.
contrasting
NeurIPS
train_3
We give a problem for which the sample complexity of the trivial algorithm is logarithmic in n, whereas it is linear in n for the natural class of algorithms that predicts with the linear combination of instances.
why should we consider learning problems that pick columns out of a random matrix?
contrasting
NeurIPS
train_4
The disadvantage of using the binary SVM is that, in general, the 0-1 loss is a poor approximation for the AP loss.
the quality of the approximation is not uniformly poor for all samples, but depends heavily on their separability.
contrasting
NeurIPS
train_5
In most cases, especially with a very large vocabulary, V is significantly larger than D, and the additional computation cost is negligible.
as V decreases, the portion of the overhead increases.
contrasting
NeurIPS
train_6
From the table it is seen that all the samplers perform equally well when the number of items is small (N = 8).
as the number of items increases SM significantly outperforms all other samplers.
contrasting
NeurIPS
train_7
Given Φ, we can compute the joint distribution over the prediction range for each time series analytically, as this joint distribution is a multivariate Gaussian.
in practice it is often more convenient to represent the forecast distribution in terms of K Monte Carlo samples. In order to generate prediction samples from a state space model, one first computes the posterior of the latent state p(l_T | z_{1:T}) for the last time step T in the training range, and then recursively applies the transition equation and the observation model to generate prediction samples.
contrasting
NeurIPS
train_8
Besides the superiority of finding good clusters, AP exhibits the surprising ability of handling large-scale data.
aP is computationally expensive to acquire clusters when the number of clusters is set in advance.
contrasting
NeurIPS
train_9
Two classes of computable Stein discrepancies-the graph Stein discrepancy [10,12] and the kernel Stein discrepancy (KSD) [7,11,19,21]-have since been developed to assess and tune Markov chain Monte Carlo samplers, test goodness-of-fit, train generative adversarial networks and variational autoencoders, and more [7, 10-12, 16-19, 27].
in practice, the cost of these Stein discrepancies grows quadratically in the size of the sample being evaluated, limiting scalability.
contrasting
NeurIPS
train_10
[10] study k-way clustering and show that the eigenvectors of the graph Laplacian are stable in 2-norm under small perturbations.
this justifies the use of k-means in the perturbed subspace, since ideally, without noise, the spectral embedding by the top k eigenvectors of the graph Laplacian reflects the true cluster memberships; closeness in 2-norm does not translate into a strong bound on the total number of errors made by spectral clustering.
contrasting
NeurIPS
train_11
In fact, it can be shown that a more rudimentary form of reweighted ℓ1 applied to this model in [19] amounts to performing exactly one such iteration.
repeated execution of (9) is cheap computationally since it scales as O(nm‖x^(k+1)‖_0), where typically ‖x^(k+1)‖_0 ≤ n, and is substantially less intensive than the subsequent ℓ1 step given by (3).
contrasting
NeurIPS
train_12
Point processes are popular models of neural spiking behavior as they provide a statistical distribution over temporal sequences of spikes and help to reveal the complexities underlying a series of recorded action potentials.
the most common neural point process models, the Poisson process and the gamma renewal process, do not capture interactions and correlations that are critical to modeling populations of neurons.
contrasting
NeurIPS
train_13
Complementary techniques such as exact covariance thresholding [13,19], and the divide and conquer approach of [8], have also been proposed to speed up the solvers.
as noted in [8], the above methods do not scale to problems with more than 20,000 variables, and typically require several hours even for smaller dimensional problems involving ten thousand variables.
contrasting
NeurIPS
train_14
Batch normalization tackles the issue by normalizing the output of neurons to zero mean and unit variance and then performing dropout independently.
our proposed evolutional dropout tackles this issue from another perspective by exploiting a distribution-dependent dropout, which adapts the sampling probabilities to the evolving distribution of a layer's outputs.
contrasting
NeurIPS
train_15
One direct way to address this problem is to score workers using their past performance on similar problems.
this is not always practical, since historical records are hard to maintain for anonymous workers, and their past tasks may be very different from the current one.
contrasting
NeurIPS
train_16
Ideally, we would perform maximum likelihood estimation on the parameters, maximizing p_θ(x_r) = ∫ p_θ(x_r, z_r) dz_r, and compute the posterior p_θ(z|x).
under an fLDS, neither p_θ(z|x) nor p_θ(x) is computationally tractable (both due to the noise model P and the nonlinear observation model f(·)).
contrasting
NeurIPS
train_17
The reconstruction error for each instance can be used as an anomaly score.
the reconstruction errors are not reliable because they are calculated from parameters that are estimated using data with anomalies by assuming that all of the instances are non-anomalous.
contrasting
NeurIPS
train_18
In particular, we expect that with random initialization, general stochastic gradient descent will need exponential time to escape saddle points in the worst case.
if we add perturbations per iteration or the inherent randomness is non-degenerate in every direction (so the covariance of noise is lower bounded), then polynomial time is known to suffice [Ge et al., 2015].
contrasting
NeurIPS
train_19
In fact, the empirical error with a finite margin is shown to converge to zero if γ is sufficiently large.
the existence of a weak learner with error 1/2 − γ is not always useful in terms of generalization error, since it applies even to the extreme case where the binary labels are drawn independently at random with equal probability at each point, in which case we cannot expect any generalization.
contrasting
NeurIPS
train_20
how do I move through the door without bumping into it [11,16,17,20,33,34].
our work focuses on the longer time-scale problem of path following e.g.
contrasting
NeurIPS
train_21
Finding large independent sets is a fundamental problem in algorithm design and analysis, and computing ALPHA(G) is a classic NP-hard problem which is very hard even to approximate [11].
the Lovász function ϑ(G) gives a tractable upper bound, and since then the Lovász ϑ function has been extensively used in solving a variety of algorithmic problems, e.g.
contrasting
NeurIPS
train_22
In recent years, the language model Latent Dirichlet Allocation (LDA), which clusters co-occurring words into topics, has been widely applied in the computer vision field.
many of these applications have difficulty with modeling the spatial and temporal structure among visual words, since LDA assumes that a document is a "bag-of-words".
contrasting
NeurIPS
train_23
We also observe this phenomenon on English→German translation.
we show that by using the value network, such a shortage can be largely avoided.
contrasting
NeurIPS
train_24
In the more general setting, an instantiation x can change hyperparameters all through the network, leading to different weights.
we believe that a single data instance will not usually lead to a dramatic change in the distributions.
contrasting
NeurIPS
train_25
In this case, it is not possible to correctly identify all the features with large probability.
we can show that FoBa can still select part of the features reliably, with good parameter estimation accuracy.
contrasting
NeurIPS
train_26
Thus, the eigen-distortion test reveals generalization failures in the CNN and VGG16 architectures that are not exposed by traditional methods of cross-validation.
the models with architectures that mimic biology (On-Off, LGG, LG) are constrained in a way that enables better generalization.
contrasting
NeurIPS
train_27
All of the above methods are designed for the batch setting, where all of the data is collected in advance and used at once.
if the training dataset is extremely large or the data are streaming and encountered in sequence, we may want to incrementally update the approximate posterior of the latent function f . Early work by Csató and Opper [6] proposed an online version of GPR, which greedily performs moment matching of the true posterior given one sample instead of the posterior of all samples.
contrasting
NeurIPS
train_28
Assuming that EP already yields a good approximation, the computation of a small number of these terms may be sufficient to obtain the most dominant corrections.
when the leading corrections come out large or do not sufficiently decrease with order, this may indicate that the EP approximation is inaccurate.
contrasting
NeurIPS
train_29
All that is needed now is to show a separation of their ε-optimal sets for 0 < ε < 1/(60 d^(3/2)), and this is done by showing a separation of the more manageable sets S_1 and S_2.
indeed, fix 0 < ε < 1/(60 d^(3/2)) and observe that for any w ∈ S_1 we have [...], and so, for d ≥ 4, for any w ∈ S_2 we have [...]. We see that no w can exist in both S_1 and S_2, so these sets are disjoint.
contrasting
NeurIPS
train_30
The most efficient message update schedule for tree structured models is a two-pass procedure where messages are first sent from the leaves to the root node, and then propagated backwards from the root to the leaves.
as with other message-passing algorithms, for tree structured instances the algorithm will converge with either a sequential or a parallel update schedule, with any initial condition for the messages.
contrasting
NeurIPS
train_31
1, we provide NeuralFDR, a practical end-to-end algorithm to the multiple hypotheses testing problem where the hypothesis features can be continuous and multi-dimensional.
the currently widely-used algorithms either ignore the hypothesis features (BH [3], Storey's BH [21]) or are designed for simple discrete features (group BH [13], IHW [15]).
contrasting
NeurIPS
train_32
The running time of the algorithm is O(nM(r)d), where r = min(r_1, r_2).
at smaller scales, where M(r) is comparable with n, it is O(n^2 d). Since the variance of the estimate also tends to be smaller at smaller scales, the algorithm iterates less for the same accuracy.
contrasting
NeurIPS
train_33
The most common way around this is to encourage sparsity during training by way of a penalty function on the expected conditional hidden unit activations given data [10].
this training-time procedure is a heuristic and does not guarantee sparsity at test time.
contrasting
NeurIPS
train_34
The metric is often taken to be Euclidean, Manhattan or χ^2 distance.
it is well known that in many cases these choices are suboptimal in that they do not exploit statistical regularities that can be leveraged from labeled data.
contrasting
NeurIPS
train_35
As with AT, β of FT is set to 10^3 on ImageNet and PASCAL VOC 2007.
we set it to 5 × 10^2 on CIFAR-10 and CIFAR-100 because a large β hinders convergence.
contrasting
NeurIPS
train_36
The technique for estimating bandable covariance matrices proposed in [6] is shown to achieve the optimal rate of convergence.
no such theoretical guarantees have been shown for the bandable precision estimator proposed in recent work for estimating sparse and smooth precision matrices that arise from cosmological data [15].
contrasting
NeurIPS
train_37
In the literature on manifold learning, many methods have been proposed to construct these adjacency matrices locally, e.g., via the heat kernel function [2].
in the context of manifold alignment, there might be partial alignment cases, in which some points on one manifold might not correspond to any points on the other manifold.
contrasting
NeurIPS
train_38
With the increasing popularity of machine learning techniques, it has become common to see prediction algorithms operating within some larger process.
the criteria by which we train these algorithms often differ from the ultimate criteria on which we evaluate them.
contrasting
NeurIPS
train_39
This establishes minimax optimality of the DS/LOO estimators for functionals of two distributions when s ≥ d/2.
when s < d/2 there is a gap between our upper and lower bounds.
contrasting
NeurIPS
train_40
Corel hist: 20,000 histograms (64-dimensional) of color thumbnail-sized images taken from the COREL STOCK PHOTO library.
of the 64 dimensions, only 44 of them contain non-zero entries.
contrasting
NeurIPS
train_41
Interestingly, because log |Σ| ≤ tr(Σ)−M , the trace norm is the convex envelope for the log-determinant, and thus the two minimization problems are somehow doing similar things.
the framework introduced in this paper goes beyond the two methods by introducing an informative kernel Ω between tasks.
contrasting
NeurIPS
train_42
With the notation z_i = L · q(i) − ψ_i and z_0 = −ψ_0 we re-write the minimax quantity as [...]. Observe that without the constraint that p_t(i) ≤ 1/L for i > 0 we would put all the probability mass on the maximum of the z_i.
with the constraint, the maximizer puts as much probability mass as allowed on the maximum coordinate argmax_{i∈{0,...,K}} z_i and continues to the next highest quantity.
contrasting
NeurIPS
train_43
Their approach augmented a traditional supervised learning algorithm with distribution information made available from the unlabeled data.
this paper considers a method for augmenting a traditional unsupervised learning problem with the addition of equivalence classes.
contrasting
NeurIPS
train_44
The fixed order of the 441 features can be considered acceptable since any input-output mapping can in principle be learned, assuming we have sufficient training data (and an appropriate network architecture).
if the amount of training data is limited then a better-structured, more compact representation might be of great advantage, as opposed to having to see most of the possible configurations of co-evolution.
contrasting
NeurIPS
train_45
The existence of shift parameters seems to require extra additions/subtractions (see (2) and (8)).
the binarization operation with a shift parameter can be implemented as a comparator where the shift parameter is the number for comparison, e.g., H_v(R) = 1 if R ≥ 0.5v and −1 if R < 0.5v (0.5v is a constant), so no extra additions/subtractions are involved.
contrasting
NeurIPS
train_46
There also exist a few works that use GANs to impute the missing values [46].
what these works focused on is non-sequential datasets, and they have not adopted pertinent measures to process the temporal relations.
contrasting
NeurIPS
train_47
As feedback SNR increases, we expect the BER to decrease.
as shown in Figure 4 (Left), the C-L scheme, which is designed for noisy feedback, and S-K scheme are very sensitive to noise in the feedback, and reliability is almost independent of feedback quality.
contrasting
NeurIPS
train_48
If this can be achieved, the resulting GIS procedure will be unbiased via the arguments of the previous section.
the G -weights must not only satisfy the constraint (1), they must also be efficiently calculable from a given sample.
contrasting
NeurIPS
train_49
For all i ∈ N, all θ_i ∈ Θ_i and all a_i ∈ A_{i,θ_i}, we have u_i(σ [...]). Standard BN inference methods could be used to compute E[U_{a_i} | θ_i].
such standard algorithms do not take advantage of structure that is inherent in BAGGs.
contrasting
NeurIPS
train_50
As noted earlier, the running time of the batch algorithm goes up as t increases (as it has to optimize over the entire past).
the running time of the online algorithm is independent of the past and only depends on the number of documents introduced in each timestep (which in this case is always 1000).
contrasting
NeurIPS
train_51
Both of these algorithms were also compared with the SVM equipped with an RBF kernel of variance σ^2 and a soft margin parameter C. Each SCM algorithm used the L_2 metric since this is the metric present in the argument of the RBF kernel.
in contrast with , each SCM was constrained to use only balls having centers of the same class (negative for conjunctions and positive for disjunctions).
contrasting
NeurIPS
train_52
While discussing learning the kernel, we showed that L_1 and L_2 cannot be updated simultaneously in a CCCP-style iteration since g is not convex over (S_1, S_2).
it can be shown that g is geodesically convex over the Riemannian manifold of positive definite matrices, which suggests that deriving an iteration which would take advantage of the intrinsic geometry of the problem may be a viable line of future work.
contrasting
NeurIPS
train_53
However, a limitation of SDPs is their computational complexity [1], which has restricted their application to small scale problems [6].
an important special case of SDPs is the class of quadratically constrained quadratic programs (QCQPs), which are computationally more efficient.
contrasting
NeurIPS
train_54
[45] used a deep network to project dense trajectory features from different views into a canonical view.
most of the previous methods require access to 3D human pose information (e.g.
contrasting
NeurIPS
train_55
Deep learning has become a ubiquitous technology to improve machine intelligence.
most of the existing deep models are structurally very complex, making them difficult to deploy on mobile platforms with limited computational power.
contrasting
NeurIPS
train_56
Over the years, several methods have been proposed to solve coupled tensor completion (Acar et al., 2014;Ermis et al., 2015).
many of these methods are non-convex models leading to locally optimal solutions.
contrasting
NeurIPS
train_57
Purely feed-forward network architectures have also been proposed [3].
such networks become unfeasibly large for practical applications.
contrasting
NeurIPS
train_58
TiMT, however, follows by a slim margin.
tiWnet explains the similarities exclusively by adding more (unnecessary) edges, which is reflected in its increased, but strongly consistent false positive rate.
contrasting
NeurIPS
train_59
At first sight, the difference between move-making algorithms and the LP relaxation appears to be the standard accuracy vs. speed trade-off.
for some special cases of distance functions, it has been shown that appropriately designed move-making algorithms can match the theoretical guarantees of the LP relaxation [14,15,20].
contrasting
NeurIPS
train_60
The problem of estimating the sparse precision matrix in Gaussian graphical models has been studied by a large body of literature [23,29,12,28,6,34,37,38,33].
the real world data may not follow a sparse GGM, especially when some of the variables are unobservable.
contrasting
NeurIPS
train_61
The introduction of local max-pooling layers in CNNs has helped to satisfy this property by allowing a network to be somewhat spatially invariant to the position of features.
due to the typically small spatial support for max-pooling (e.g.
contrasting
NeurIPS
train_62
Although dimensionality reduction can be done posthoc using PCA, [10] shows that this doesn't lead to performance improvement.
we show in §4 that selecting k_v can improve the performance of SRM beyond that attained by HA.
contrasting
NeurIPS
train_63
The difference of the learner's total reward and the total reward of the optimal strategy is called the pseudo-regret (Audibert et al., 2009) of the algorithm, and it can be formally written as [...]. As compared to the regret, the pseudo-regret has the same expected value, but lower variance because the additive noise η_t is removed.
the omitted quantity is uncontrollable, hence we have no interest in including it in our results (the omitted quantity would also cancel if η_t was a sequence selected independently of X_{1:t}). In what follows, for simplicity we use the word regret instead of the more precise pseudo-regret in connection to R_n. The goal of the algorithm is to keep the regret R_n as low as possible.
contrasting
NeurIPS
train_64
In the MAB literature, several recent works consider multi-player MAB scenarios in which players actually compete with each other, either on arm-pulls resources [15] or on the rewards received [19].
we study a collaborative multi-player problem and investigate how sharing observations helps players achieve their common goal.
contrasting
NeurIPS
train_65
On the one hand, it is usually acceptable to output a policy that is only locally optimal with respect to the optimization objective.
in many application scenarios where constraints encode safety requirements or the amount of available resources, violating the constraint even by a small amount may have significant consequences.
contrasting
NeurIPS
train_66
Column normalization can be viewed as a principled first step towards solving challenging sparse estimation problems.
when non-convex sparse regularizers are used for the image penalty, e.g., ℓ_p norms with p < 1, then local minima can be a significant problem.
contrasting
NeurIPS
train_67
In simple cases, topic models can be used to cluster local textural elements, coarsely representing categories via a bag of visual features [1,2].
spatial structure plays a crucial role in general scene interpretation [3], particularly when few labeled training examples are available.
contrasting
NeurIPS
train_68
For example, Node A concerns washbasins for infants, and has two polarized children nodes: reviewers take a positive perspective when their children enjoy the product (Node B: "loves", "splash", "play") but have negative reactions when it leaks (Node C: "leak(s/ed/ing)").
the lowest topics in the hierarchy are often polarized; one child topic of "router" focuses on upgradable firmware such as "tomato" and "ddwrt" (Node E, positive) while another focuses on poor "tech support" and "customer service" (Node F, negative).
contrasting
NeurIPS
train_69
The teaching set of a concept c ∈ C is a set of indices (or examples) X ⊆ [n] that uniquely identifies c from C. Formally, given a concept class C ⊆ {0, 1}^n (a set of binary strings of length n), X ⊆ [n] is a teaching set for a concept c ∈ C (a binary string in C) if X satisfies c|_X ≠ c′|_X for all other concepts c′ ∈ C. The teaching dimension of a concept class C is the smallest number t such that every c ∈ C has a teaching set of size no more than t [GK95, SM90].
teaching dimension does not always capture the cooperation in teaching and learning (as we will see in Example 2), and a more optimistic and realistic notion of recursive teaching dimension has been introduced and studied extensively in the literature [Kuh99, DSZ10, ZLHZ11, WY12, DFSZ14, SSYZ14, MSWY15].
contrasting
NeurIPS
train_70
(2013) and references therein).
recent work (Dauphin et al., 2014;Choromanska et al., 2014) has brought theoretical and empirical evidence suggesting that local minima are with high probability not the main obstacle to optimizing large and deep neural networks, contrary to what was previously believed: instead, saddle points are the most prevalent critical points on the optimization path (except when we approach the value of the global minimum).
contrasting
NeurIPS
train_71
Recent progress in deep generative models has led to tremendous breakthroughs in image generation.
while existing models can synthesize photorealistic images, they lack an understanding of our underlying 3D world.
contrasting
NeurIPS
train_72
As expected, Hotelling, Edist and MMD perform best for differences in the Gaussian distribution (column 4).
in all other settings Hotelling's test has poor power, and our approach with minP as the univariate test has more power than Edist and MMD in columns 5-7.
contrasting
NeurIPS
train_73
By its definition, the sampler q*(u_j) is computed based on the hidden feature h^(l)(u_j) that is aggregated by its neighborhoods in previous layers.
under our top-down sampling framework, the neural units of lower layers are unknown unless the network is completely constructed by the sampling.
contrasting
NeurIPS
train_74
In general, the exact form of this pooling function is determined by the complex interaction between the MSR and agent utility, and a closed form of p_i from (4) might not be attainable in many cases.
given a particular MSR, we can venture to identify agent utility functions which give rise to well-known opinion pools.
contrasting
NeurIPS
train_75
In particular, it should be possible to increase the number of limited capacity units in a population to form a more precise representation of the sensory signal.
to the best of our knowledge, such a code has not been characterized analytically, even in the simplest case.
contrasting
NeurIPS
train_76
[19] use a simple linear algorithm for this step, which takes O(|P|) time.
we propose a more efficient algorithm to maximize δ j (•), which exploits the special structure of this discrete function.
contrasting
NeurIPS
train_77
On the one hand, we want to be expressive, and learn all the transitions possible from every o within a horizon h. When o is a high dimensional image observation, this typically requires mapping the image to an extensive feature space [30,12].
however, we want to plan efficiently, which generally requires either low dimensional state spaces or well-structured representations.
contrasting
NeurIPS
train_78
by D-softmax is restricted from the start, and may therefore be lacking in terms of expressiveness.
our algorithm first trains words with a full-length vector and dynamically limits the dimension during evaluation.
contrasting
NeurIPS
train_79
When constrained to only two labels, their results provide a bipartite SBM.
in the bipartite SBM case, [17] has two drawbacks compared to the results presented here: (1) The data-generating process in [17] rules out certain nested structures of the sets V i .
contrasting
NeurIPS
train_80
Discriminator D. In traditional GANs, the discriminator distinguishes between real ground-truth images and fake generated images (which are generated from random noise).
in our conditional network, G2 takes the condition image I A instead of a random noise as input.
contrasting
NeurIPS
train_81
For the second category of global landscape analysis, the typical result is that every local minimum is a global minimum.
even for single-layer networks, strong assumptions such as over-parameterization, very special neuron activation functions, fixed second layer parameters and/or Gaussian data distribution are often needed in the existing works.
contrasting
NeurIPS
train_82
All prior methods select the local neighborhood based on proximity, and they typically fix its size.
our idea is to predict the set of training instances that will produce an effective discriminative model for a given test instance.
contrasting
NeurIPS
train_83
Recent work has shown that randomized value functions can implement something similar to Thompson sampling without the need for an intractable exact posterior update.
this work is restricted to linearly-parameterized value functions [16].
contrasting
NeurIPS
train_84
Note that the costs may be either positive or negative.
only their relative values are important.
contrasting
NeurIPS
train_85
For GMRFs with n nodes indexing d-dimensional random subvectors, I(x_R; x_A) can be computed exactly in O((nd)^3) via Schur complements/inversions on the precision matrix J.
certain graph structures permit the computation via belief propagation of all local pairwise MI terms I(x_i; x_j), for adjacent nodes i, j ∈ V, in O(n · d^3), a substantial savings for large networks.
contrasting
NeurIPS
train_86
This approach is appropriate when objects can be partitioned into relatively homogeneous subsets.
the properties of many objects are better captured by representing each object using multiple latent features.
contrasting
NeurIPS
train_87
The partial derivatives take a standard log-sum-exp form, requiring expectations under p(y | ...). A naive computation of this expectation would require summing over (D choose k) configurations, where k = Σ_d y_d.
there are more efficient alternatives: the dynamic programming algorithms developed in the context of Poisson-Binomial distributions are applicable, e.g., the algorithm from [3] runs in O(Dk) time.
contrasting
NeurIPS
train_88
In the fully supervised setting, one can sidestep the task of estimating conditional probabilities by directly learning a classifier in a discriminative fashion.
in unsupervised or semi-supervised settings, a reliable estimate of the conditional distributions becomes important.
contrasting
NeurIPS
train_89
One possibility is to start at the position of the deleted point, θ i−1 , on the contour constraint, which is independent of the other points and not far from the bulk of the required uniform distribution.
if the Markov chain mixes slowly amongst modes, the new point starting at θ i−1 may be trapped in an insignificant mode.
contrasting
NeurIPS
train_90
REINFORCE-OB contributes highly to reducing the variance especially when T is large, which also well agrees with our theory.
pGPE-OB still provides much smaller variance than REINFORCE-OB.
contrasting
NeurIPS
train_91
Compared with [12] on subset evaluations, our method significantly improves over [12] on horse-riding and running detection.
[12] provides better detection results than ours on diving detection.
contrasting
NeurIPS
train_92
Their runtime is independent of, or even decreases with, the number of training samples [5,6].
because of their simplicity, these methods have a slow convergence rate, and thus may require a large number of iterations.
contrasting
NeurIPS
train_93
We find that a CNN augmented with an RN achieves an accuracy above 94% for both relational and non-relational questions.
a CNN augmented with an MLP only reached this performance on the non-relational questions, plateauing at 63% on the relational questions.
contrasting
NeurIPS
train_94
To see this, note that the sequence of partitions P_n^{0,bad} becomes arbitrarily ill-balanced, which from (10) implies lim_{n→∞} Pcut_{G_n^0}(P_n^{0,bad}) = 1.
the unperturbed graph G_n grows in a self-similar fashion as n → ∞ and so the Product Cut of P_n remains approximately a constant, say γ, for all n. Thus Pcut_{G_n}(P_n) ≈ γ < 1 for n large enough, and Pcut_{G_n^0}(P_n^{0,good}) [...]. Comparing this upper bound with the fact lim_{n→∞} Pcut_{G_n^0}(P_n^{0,bad}) = 1, we see that the Product Cut of P_n^{0,bad} eventually becomes larger than the Product Cut of P_n^{0,good}. While we execute this program in full only for the example above, this line of argument is fairly general and similar stability estimates are possible for more general families of graphs.
contrasting
NeurIPS
train_95
At the beginning of learning, when J_a is of small magnitude, the diffusion term ⟨F_a^2⟩ has a large impact, so that it greatly impedes learning in the large-η case.
as the magnitude of the differences J_A − J_B increases, this effect weakens and the dependence of p_A on η becomes quite small.
contrasting
NeurIPS
train_96
[13] show that by using O(log n) multiple-vertex interventions, one can recover the transitive reduction of a DAG.
in this case, each set of intervened variables has a size of O(n/2), which means that the method of [13] has to perform a total of O(2^{n/2} log n) experiments, one for each possible setting of the O(n/2) intervened variables (see an example of this in Appendix D).
contrasting
NeurIPS
train_97
Specifically, we set Q(x) to be sigmoidal with parameter λ (see Figure 2a). As λ → ∞, Q(x; λ) → 1{x > 0}, so R_labelwise(w : D) approaches the objective function defined in (7).
R_labelwise(w : D) is smooth for any finite λ > 0.
contrasting
NeurIPS
train_98
Any nonlinear Wiener functional, for instance, creates infinitely many correlations or cumulants of higher order, and often also of lower order.
a Wiener functional of order n produces only harmonic phase interactions up to order n + 1, but sometimes also of lower orders.
contrasting
NeurIPS
train_99
In small or medium networks, we can rely on well-known numerical methods to compute matrix exponentials [24].
in large networks, the explicit computation of Ψ(t) becomes intractable.
contrasting
NeurIPS