id         string (lengths 7–12)
sentence1  string (lengths 5–1.44k)
sentence2  string (lengths 6–2.06k)
label      string (4 classes)
domain     string (5 classes)
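A minimal sketch of how rows with the schema above could be loaded and inspected with the Hugging Face `datasets` library. The repository path below is a placeholder (an assumption, not the actual identifier of this dataset); the column names come from the schema shown here.

```python
# Sketch only: "org-name/sentence-pair-dataset" is a hypothetical placeholder path.
from datasets import load_dataset

ds = load_dataset("org-name/sentence-pair-dataset", split="train")

# Each example carries: id, sentence1, sentence2, label, domain.
for example in ds.select(range(3)):
    print(example["id"], example["label"], example["domain"])
    print("  s1:", example["sentence1"][:80])
    print("  s2:", example["sentence2"][:80])
```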
train_300
If only a single plastic synapse is taken into consideration, covariance learning rules seem to make matching behavior a steady state of learning.
under certain situations where a large number of synapses simultaneously modify their efficacy, matching behavior cannot be a steady state.
contrasting
NeurIPS
train_301
Unfortunately, this approximate equivalence between excitatory and inhibitory neurons is inconsistent with the anatomical observation that only about 15% of cortical neurons are inhibitory.
the original architecture could probably work if we had larger populations of map neurons, more synapses, and/or NMDA-like synapses with longer time constants.
contrasting
NeurIPS
train_302
Then we have a formula for f and a subgradient: f(x) = ∑_{k=1}^{n} x(k) (f(S_k) − f(S_{k−1})); ∂f(x) ∋ ∑_{k=1}^{n} e_{(k)} (f(S_k) − f(S_{k−1})). Equation (2) was used to show that submodular minimization can be achieved in polynomial time [16].
algorithms which directly minimize the Lovasz extension are regarded as impractical.
contrasting
NeurIPS
train_303
. . , a n . We will analyze the following popular non-convex objective, (1.1) Previously, Ge et al.
[GHJY15] show that for the orthogonal case where n ≤ d and all the a_i's are orthogonal, the objective function f(·) has only 2n local maxima that are approximately ±1; the technique heavily uses the orthogonality of the components and is not generalizable to the over-complete case.
contrasting
NeurIPS
train_304
We think of X as "public," while and are private and only needed at the time of compression.
even with = 0 and known, recovering X from X requires solving a highly under-determined linear system and comes with information theoretic privacy guarantees, as we demonstrate.
contrasting
NeurIPS
train_305
The advantages of such formulation are twofold: the dimension of the optimization variable is reduced, and positive semidefiniteness is naturally enforced.
optimization in Y is non-convex.
contrasting
NeurIPS
train_306
recently, [9] propose to use a Variational Autoencoder as a heuristic way to recover the latent confounders from multiple proxies.
matrix factorization methods, despite stronger parametric assumptions, address the problem of missing values simultaneously, require considerably less parameter tuning, and have theoretical justifications.
contrasting
NeurIPS
train_307
Options are temporally-extended actions that, like HRA's heads, can be trained in parallel based on their own (intrinsic) reward functions.
once an option has been trained, the role of its intrinsic reward function is over.
contrasting
NeurIPS
train_308
3b shows that the condition number of A becomes worse as the degree d becomes larger, and as more probability mass is assigned to the dense part G d of the transition matrix T , providing some weak evidence for the necessity of Condition 3.
also, recall that Theorem 1 shows that HMMs where the transition matrix is a random walk on an undirected regular graph with large degree (degree polynomial in n) cannot be learned using polynomially many samples if m is a constant with respect to n. Such graphs have all eigenvalues except the first one less than O(1/√d), hence it is not clear whether the hardness of learning depends on the large degree itself or is only due to T being ill-conditioned.
contrasting
NeurIPS
train_309
( 1), we need to marginalize over all possible global transformations T . In our current implementation we used only global shifts, and assumed uniform distributions over all shifts, i.e., P (T |H ref ) = 1/|ref |.
the algorithm can accommodate more complex global transformations.
contrasting
NeurIPS
train_310
The second factor is mixing filters: algorithms typically seek, and directly optimize, a transformation that would unmix the sources.
in many situations, the filters describing medium propagation are non-invertible, or have an unstable inverse, or have a stable inverse that is extremely long.
contrasting
NeurIPS
train_311
To date, matrix and tensor decomposition has been extensively analyzed, and there are a number of variations of such decomposition (Kolda and Bader, 2009), where the common goal is to approximate a given tensor by a smaller number of components, or parameters, in an efficient manner.
despite the recent advances of decomposition techniques, a learning theory that can systematically define decomposition for any order tensors including vectors and matrices is still under development.
contrasting
NeurIPS
train_312
The proposed framework is motivated by the operator-theoretic view of nonlinear dynamical systems.
learning a generative (state-space) model for nonlinear dynamical systems directly has been actively studied in the machine learning and optimal control communities, on which we mention a few.
contrasting
NeurIPS
train_313
By choosing w = (1, • • • , 1) we obtain a trivial (k, 0)-coreset.
in a more efficient coreset most of the weights will be zero and the corresponding rows in A can be discarded.
contrasting
NeurIPS
train_314
First, the accuracy of the belief model β(τ ) in Equation 1 is highly dependent on the performance model P (U |τ, π), which evaluates the policy π behaving against the opponent using policy τ , named response policy.
the performance model of a response policy against different strategies might be the same in multiagent domains, resulting in indistinguishability of the belief model and thus inaccurate detection.
contrasting
NeurIPS
train_315
This has been formalized by "differential privacy", which provides bounds on the maximum disclosure risk [15].
differential privacy hinges on the benevolence of an organization to which you give your data: the privacy of individuals is preserved as long as organizations which collect and analyze data take necessary steps to enforce differential privacy.
contrasting
NeurIPS
train_316
It behaves like e^y for y < 0, but grows linearly for positive y.
it exhibits grave problems for latent state forecasting.
contrasting
NeurIPS
train_317
If G is a tree, this is obviously satisfied.
the result holds on any graph for which: the subgraph induced by P_{u:v} is a chain; and every i ∈ P_{u:v} separates N(i) \ P_{u:v} from P_{u:v} \ {i}, where N(i) := {j : {i, j} ∈ E} is the neighbor set of i.
contrasting
NeurIPS
train_318
The cover tree, which is designed exclusively for NNS, shows slightly better query performance than ours.
the MS-distance is more general and flexible: it supports addition of a new vector to the data set (our data structure) in O(d) time for computing the mean and the standard deviation values of the vector.
contrasting
NeurIPS
train_319
If the algorithm guarantees a zero regret against the competitor with zero L_2 norm, then there exists a sequence of T vectors in X, such that the regret against any other competitor is Ω(T).
if the algorithm guarantees a regret of at most ε > 0 against the competitor with zero L_2 norm, then, for any 0 < η < 1, there exists a T_0 and a sequence of T ≥ T_0 unit-norm vectors z_t ∈ X, and a vector u ∈ X such that the claimed bound holds; the proof can be found in the supplementary material.
contrasting
NeurIPS
train_320
The learning process is an iterative algorithm that alternates between fixing latent values and optimizing the latent SVM objective function.
we cast part learning as finding maximal cliques in a weighted graph of image patches.
contrasting
NeurIPS
train_321
To point out a few, random averaging projection method [28] handles multiple constraints simultaneously but cannot deal with regularizers.
accelerated stochastic gradient descent with proximal average [29] can handle multiple regularizers simultaneously, but the algorithm imposes a Lipschitz condition on regularizers, and hence, it cannot deal with constraints.
contrasting
NeurIPS
train_322
Deep multi-layer neural networks have many levels of non-linearities allowing them to compactly represent highly non-linear and highly-varying functions.
until recently it was not clear how to train such deep networks, since gradient-based optimization starting from random initialization appears to often get stuck in poor solutions.
contrasting
NeurIPS
train_323
There are various general notions about their roles, such as regulating sleeping and waking [13] and changing the signal to noise ratios of cortical neurons [11].
these are slowly giving way to more specific computational ideas [20,7,10,24,25,5], based on such notions as optimal gain scheduling, prediction error and uncertainty.
contrasting
NeurIPS
train_324
The bandit algorithm of Garber (2017) additionally requires a certain boundedness property of barycentric spanners, namely that max_i q_i^⊤ Q^2 q_i is bounded.
for certain bounded sets this quantity may be unbounded, such as the two-dimensional axis-aligned rectangle with one axis being of size unity, and the other arbitrarily small.
contrasting
NeurIPS
train_325
The weights reflect the similarities between the target therapy and all training therapies.
the therapy-specific approaches do not address the bias originating from the different treatment backgrounds of the samples, or the missing treatment history information.
contrasting
NeurIPS
train_326
It is trivial to observe that these nodes will be equal to −1 (+1) in the optimal solution and that eliminating them does not affect the solution of (5).
in practice, this trivial reduction has a computationally minimal effect on large data sets.
contrasting
NeurIPS
train_327
Also, (α − γ)/α can go to zero at a suitable rate.
the expected number of common neighbors between nodes tends to zero for sparser graphs, irrespective of whether the nodes are in the same cluster or not.
contrasting
NeurIPS
train_328
This coupling is achieved by repeatedly performing inference and updating a set of dual parameters until convergence.
we perform inference independently in each sub-model only once, and reason about individual variables using the sums of max-marginals.
contrasting
NeurIPS
train_329
We also tried to run the other approaches with data generated from Gaussian distributions but the results were approximately equal to those shown in Figure 3.
our approach performs similarly but the number of reversed links increases significantly since the model is no longer identified.
contrasting
NeurIPS
train_330
In [Zhou et al., 2016], the range of activations is constrained within [0,1], which seems to avoid this situation.
fractional numbers do not solve this problem; severe precision deterioration will appear during the multiplication if there are no extra resources.
contrasting
NeurIPS
train_331
When computing Precision T (R, P i ) for a single predicted anomaly range P i , there is no need for an existence reward, since precision by definition emphasizes prediction quality, and existence by itself is too low a bar for judging the quality of a prediction (i.e., α = 0).
the overlap reward is still needed to capture the cardinality, size, and position aspects of a prediction.
contrasting
NeurIPS
train_332
For instance, for 37.5% compression, the original squeezed net only achieves 54.7%.
our proposed method lifts it up to 59.4% with a deep teacher (VGG16), which is even better than the uncompressed AlexNet model (57.2%).
contrasting
NeurIPS
train_333
[13], who show polynomial sample complexity results for learning influence in the LT and IC models (under partial observation).
their approach uses approximations to influence functions and consequently requires a strong technical condition to hold, which is not necessarily satisfied in general.
contrasting
NeurIPS
train_334
In the experts setting we have seen that the learner can distribute a prior amongst the actions and obtain a bound on the regret depending in a natural way on the prior weight of the optimal action.
in the bandit setting the learner pays an enormously higher price to obtain a small regret with respect to even a single arm.
contrasting
NeurIPS
train_335
These techniques promote sparsity in determining a small set of codewords from the dictionary that can be used to efficiently represent each visual descriptor of each image [13].
those approaches consider each visual descriptor in the image as a separate coding problem and do not take into account the fact that descriptor coding is just an intermediate step in creating a bag of codewords representation for the whole image.
contrasting
NeurIPS
train_336
Moreover, we note that counting the number of possible outputs of ERM also has connections to a counting argument made in [1] in the context of pricing mechanisms.
in essence the argument there is restricted to transductive settings where the sample "features" are known in advance and fixed and thereby the argument is much more straightforward and more similar to standard notions of "effective hypothesis space" used in VC-dimension arguments.
contrasting
NeurIPS
train_337
A better one is to handcraft a policy that chooses an action based on the history of actions and observations, a technique used in [18].
it is often difficult to handcraft effective history-based policies.
contrasting
NeurIPS
train_338
The reinforcement learning controller is encouraged by the reward structure to accomplish each movement as quickly as possible.
it faces high uncertainty in the plant behavior.
contrasting
NeurIPS
train_339
First, continuously projecting at every step helps to reduce overfitting, as can be observed by the slower decline of the blue curve (upper smooth curve) compared to the orange curve (lowest curve).
when projection is performed after many steps, (instead of continuously), performance of the projected model actually outperforms the continuous-projection model (upper jittery curve).
contrasting
NeurIPS
train_340
Time series are ubiquitous in many classification/regression applications.
the time series data in real applications may contain many missing values.
contrasting
NeurIPS
train_341
In this case, it is rigorously known [21,22,23] that detection is impossible by any algorithm for the SBM with a group structure weaker than where ϵ ≡ ρ_out/ρ_in.
it is important to note that this is the information-theoretic limit, and the achievable limit for a specific algorithm may not coincide with Eq.
contrasting
NeurIPS
train_342
The subsequent use of Lloyd's algorithm to refine the solution only guarantees that the solution quality does not deteriorate and that it converges to a locally optimal solution in finite time.
using naive seeding such as selecting data points uniformly at random followed by Lloyd's algorithm can produce solutions that are arbitrarily bad compared to the optimal solution.
contrasting
NeurIPS
train_343
This seems to imply that scaling up these models to large RFs is relatively easy.
these models need to be trained on audio excerpts that are at least as long as their RFs, otherwise they cannot capture any structure at this timescale.
contrasting
NeurIPS
train_344
For many nonparametric divergence estimators the large sample consistency has already been established and the mean squared error (MSE) convergence rates are known for some.
there are few results on the asymptotic distribution of non-parametric divergence estimators.
contrasting
NeurIPS
train_345
Rather, we maintain a Gaussian approximation of the posterior on the full space, Θ.
when optimizing our stimuli we combine our posterior with our knowledge of M in order to do a better job of maximizing the informativeness of each experiment.
contrasting
NeurIPS
train_346
(1) can be rewritten as the Bellman equation. The advantage of using the Bellman equation is that it describes the relationship between the value function at one state s and its immediate follow-up states s′ ∼ p(s′ | s, a).
the direct computation of Eq.
contrasting
NeurIPS
train_347
the least squares projection), one can adopt a robust loss such as Huber's loss as the distance, which often gives a better result (robust projection [9]).
a major drawback of these projection approaches is that all pixels are updated by the projection.
contrasting
NeurIPS
train_348
After this initial conservative phase, CLUCB has learned enough about the optimal action and its performance starts converging to that of LUCB.
figure 1 shows that per-step regret of CLUCB at the first few periods remains much lower than that of LUCB.
contrasting
NeurIPS
train_349
Also, the localization error of the GPPS system degrades only slowly when the number of calibration measurements is reduced.
the curves for the nearest neighbor based method show a sharper increase of positioning error.
contrasting
NeurIPS
train_350
For the variational scale parameters, ln g s , we see that early on the HVP+Local approximation is able to reduce parameter variance by a large factor (≈ 2,000×).
at later iterates the HVP+Local scale parameter variance is on par with the Monte Carlo estimator, while the full Hessian estimator still enjoys huge variance reduction.
contrasting
NeurIPS
train_351
Hershey used the power from the spectrogram in his algorithm to detect the visual motion.
our result for spectrogram data is in the noise, indicating that a linear model can not use spectrogram data for fine-grain temporal measurements.
contrasting
NeurIPS
train_352
Recently, there has been substantial interest in using large amounts of unlabeled data to learn word representations which can then be used as features in supervised classifiers for NLP tasks.
most current approaches are slow to train, do not model the context of the word, and lack theoretical grounding.
contrasting
NeurIPS
train_353
The IBP places a prior distribution over binary matrices where the number of columns (features) K is not bounded, i.e., K → ∞.
given a finite number of data points N , it ensures that the number of non-zero columns K + is finite with probability one.
contrasting
NeurIPS
train_354
However, due to the variance of the estimator ŝt , sEM has a slower asymptotic convergence rate than bEM for finite data sets.
specifically, let s* = F(s*) be a stationary point; Cappé and Moulines [6] showed that Dempster et al.
contrasting
NeurIPS
train_355
Autoregressive feedback is considered a necessity for successful unconditional text generation using stochastic sequence models.
such feedback is known to introduce systematic biases into the training process and it obscures a principle of generation: committing to global information and forgetting local nuances.
contrasting
NeurIPS
train_356
Many researchers attempted to solve goal-oriented dialog tasks by using the deep supervised learning (deep SL) approach [8] based on seq2seq models [9] or the deep reinforcement learning (deep RL) approach utilizing rewards obtained from the result of the dialog [10,11].
these methods struggle to find a competent RNN model that uses back-propagation, owing to the complexity of learning a series of sentences.
contrasting
NeurIPS
train_357
This up-front computation is not needed for any of the MLE algorithms described above.
each of the MLE algorithms requires some initial value for , but no such initialization is needed to find the OLS estimator in Algorithm 1.
contrasting
NeurIPS
train_358
Ψ(S) > δ, and its support x_S is smaller than or equal to min(n_1, n_2), then all children S′ of S, which satisfy S ⊂ S′ by construction of the enumeration tree, will also be non-testable and can be pruned from the search space.
such a monotonicity property does not hold for the CMH minimum attainable p-value function Ψ_cmh(S), severely complicating the development of an effective pruning criterion.
contrasting
NeurIPS
train_359
. . . , f̃_4 by minimizing the Kullback-Leibler (KL) divergence between the product of Q^{\a} and f_a and the product of Q^{\a} and f̃_a, where Q^{\a} is the ratio between Q and f̃_a.
this does not perform well for refining f̃_2; details on this problem can be found in Section 4 of the supplementary material and in [19].
contrasting
NeurIPS
train_360
That is, deep learning introduces good function classes that may have a low capacity in the VC sense while being able to represent target functions of interest well.
deep learning requires us to deal with seemingly intractable optimization problems.
contrasting
NeurIPS
train_361
Recovering X* from Q*: as for linear programming, recovering a primal optimal solution directly from a dual optimal solution is not always possible for SDPs.
at least for the hard-margin problem (no slack) this is possible, and we describe below how an optimal prediction matrix X * can be recovered from a dual optimal solution Q * by calculating a singular value decomposition and solving linear equations.
contrasting
NeurIPS
train_362
As shown in Figure 2, when the rank increases from 10 to 300, PMF can achieve RMSEs between 0.86 and 0.88.
the RMSE of MRMA is about 0.84 when mixing all these ranks from 10 to 300.
contrasting
NeurIPS
train_363
This proof relies only on commonly applicable, fairly general assumptions, thus rendering a generic result not constraining the design of larger networks.
in which way the timing of the third factor is implemented in networks will be an important issue when constructing such networks.
contrasting
NeurIPS
train_364
Specifically in this case, one can obtain small ε by increasing k_1, k_2 in Algorithm 3.
this will mean we select a large number of groups, and subsequently increases.
contrasting
NeurIPS
train_365
If a strong prior is close to the true values, the Bayesian posterior will be more accurate than the empirical point estimate.
a strong prior peaked on the wrong values will bias the Bayesian model away from the correct probabilities.
contrasting
NeurIPS
train_366
Monte-Carlo Tree Search (MCTS) has been successfully applied to very large POMDPs, a standard model for stochastic sequential decision-making problems.
many real-world problems inherently have multiple goals, where multiobjective formulations are more natural.
contrasting
NeurIPS
train_367
For short trees, the two perform equally, SSR beating AR slightly for trees with three nodes, which is not surprising since it essentially performs exact inference in this tiny topology configuration.
as trees get taller, the problem becomes more difficult, and only AR manages to maintain good performance.
contrasting
NeurIPS
train_368
Obviously, if we knew for each policy which subchains it induces on M (the MDP's ergodic structure), UCRL could choose an MDP Mt and a policy πt that maximizes the reward among all plausible MDPs with the given ergodic structure.
only the empiric ergodic structure (based on the observations so far) is known.
contrasting
NeurIPS
train_369
The lower bound ℓ ij is obtained from the extreme case of setting the missing values in a way that the two sets have the fewest features in their intersection while having the most features in their union.
the upper bound µ ij is obtained from the other extreme.
contrasting
NeurIPS
train_370
There are well-known lower bounds for multi-armed bandit problems and other online learning with partial-information settings.
they crucially depend on the semantics of the information feedback considered.
contrasting
NeurIPS
train_371
Variational autoencoders (VAE) train an inference network jointly with the parameters of the forward model to maximize a variational lower bound [15,5,11].
the use of a parametric variational distribution means they typically have limited capacity to represent complex, potentially multimodal posteriors, such as those incorporating discrete variables or structural uncertainty.
contrasting
NeurIPS
train_372
To compare the training cost, if the time cost of the related task (regression or classification) with M features is C(M ), LKRF and EERF simply spend that budget.
running K iterations of our method (with M an integer multiple of K), and assuming that repetitive features are not selected, the training cost of MFGA would be ∑_{k=1}^{K} C(kM/K), which is more than that of LKRF and EERF.
contrasting
NeurIPS
train_373
In Figure 2(a), the true class only has 10% more correct instances over wrong ones.
the true class has 37.5% more correct instances in Figure 2(b).
contrasting
NeurIPS
train_374
Similar to [11], Algorithm 1 is also greedy and based on keeping track of the supernodes.
the definition of a supernode and its updating are different.
contrasting
NeurIPS
train_375
The point process generalized linear model (GLM) has provided a useful and highly tractable tool for characterizing neural encoding in a variety of sensory, cognitive, and motor brain areas [1][2][3][4][5].
there is a substantial gap between descriptive statistical models like the GLM and more realistic, biophysically interpretable neural models.
contrasting
NeurIPS
train_376
As in the proof of Theorem 1, the main idea is to show that one can design two distributions that are indistinguishable to a learner who can observe no more than d − 1 attributes of any sample given by the distribution (i.e., that their marginals over any choice of d − 1 attributes are identical), but whose respective sets of ε-optimal regressors are disjoint.
in contrast to Theorem 1, both handling general d along with switching to the absolute loss introduce additional complexities to the proof that require different techniques.
contrasting
NeurIPS
train_377
Because of the match between the parallelism offered by hardware and the parallelism in machine-learning algorithms, mixed analog-digital VLSI is a promising substrate for machine-learning implementations.
custom VLSI solutions are costly, inflexible, and difficult to design.
contrasting
NeurIPS
train_378
Due to the square loss in RPCA, the sparse matrix S can be calculated by subtracting the low-rank matrix L from the observed data matrix.
nevertheless, in LVGGM, there is no closed-form solution for the sparse matrix due to the log-determinant term, and we need to use gradient descent to update S. Both the algorithm in [40] and our algorithm have an initialization stage.
contrasting
NeurIPS
train_379
functions are not necessarily l.c., as is easily shown (e.g., a mixture of Gaussians with widely-separated means, or the indicator of the union of disjoint convex sets).
a key theorem (10,11) gives: Theorem (Integrating out preserves log-concavity).
contrasting
NeurIPS
train_380
In all experiments thus far, our models have been trained to make numeric predictions.
as discussed in the introduction, systematic numeric computation appears to underlie a diverse range of (natural) intelligent behaviors.
contrasting
NeurIPS
train_381
Although for certain GLMs, e.g., sparse logistic regression, we can choose the step size parameter as η = 4λ_max^{-1}((1/n) ∑_{i=1}^{n} x_i x_i^⊤), such a step size often leads to poor empirical performance.
as our theoretical analysis and experiments suggest, the proposed DC proximal Newton algorithm needs very few line search steps, which saves much computational effort.
contrasting
NeurIPS
train_382
There have thus been various important suggestions for the functional significance of synaptic depression, including -just to name a few -low-pass filtering of inputs [3], rendering postsynaptic responses insensitive to the absolute intensity of presynaptic activity [4,5], and decorrelating input spike sequences [6].
important though they must be for select neural systems, these suggestions have a piecemeal flavor -for instance, chaining together stages of low-pass filtering would lead to trivial responding.
contrasting
NeurIPS
train_383
In some applications, simple models (e.g., linear models) are often preferred for their ease of interpretation, even if they may be less accurate than complex ones.
the growing availability of big data has increased the benefits of using complex models, so bringing to the forefront the trade-off between accuracy and interpretability of a model's output.
contrasting
NeurIPS
train_384
Simultaneous-move games can be solved exactly in polynomial time using the backward induction algorithm [7,4], recently improved with alpha-beta pruning [8,9].
the depth-limited search algorithms based on the backward induction require domain knowledge (an evaluation function) and computing the cutoff conditions requires linear programming [8] or using a double-oracle method [9], both of which are computationally expensive.
contrasting
NeurIPS
train_385
We model aleatoric uncertainty with MAP inference using loss functions (8) and (12) (the latter given in the appendix), for regression and classification respectively (§2.2).
we derive the loss function using a Laplacian prior, as opposed to the Gaussian prior used for the derivations in §3.
contrasting
NeurIPS
train_386
The most elegant optical implementation of adaptive interconnection is through dynamic volume holography [6,11], but that requires a set of coherent optical signals, not what we have with an array of pulse emitters.
the matrix-vector multiplier architecture allows parallel interconnection of incoherent optical signals, and has been used to demonstrate implementations of the Hopfield model [7] and Boltzmann machines [9].
contrasting
NeurIPS
train_387
There may be other models we haven't found that could beat the ones we have, and come closer to our proven envelope.
we suspect that the area constraint is not the bottleneck for optimizing memory at times less than O( M r ).
contrasting
NeurIPS
train_388
Its decision on labeling is done at once, not separately for each topic.
pMM also has a problem with multi-topic specific features such as "qbit" since it is impossible for texts to have such features given PMM's mixture process.
contrasting
NeurIPS
train_389
This massive data access requires an enormous amount of network bandwidth.
bandwidth is one of the scarcest resources in datacenters [6], often 10-100 times smaller than memory bandwidth and shared among all running applications and machines.
contrasting
NeurIPS
train_390
This algorithm and its analysis are novel to the best of our knowledge.
we note that a related greedy algorithm (that does not directly optimize the objective (2)), called Group-OMP, appears in [14,15].
contrasting
NeurIPS
train_391
In practice, one finds a single local optimum, projects to the subspace orthogonal to it and continues recursively on a lower-dimensional problem.
a naive implementation of this idea is unstable since approximation errors can accumulate badly, and to the best of our knowledge no rigorous analysis has been given prior to our work.
contrasting
NeurIPS
train_392
For both of these sequences, rough tracking (not shown) is possible without occlusion reasoning, since all fingers are the same color and the background is unambiguous.
we find that stability improves when occlusion reasoning is used to properly discount obscured edges and silhouettes.
contrasting
NeurIPS
train_393
The cost-average policy π 1 greedily selects the items that maximize the worst-case utility gain per unit cost increment if they are still affordable by the remaining budget.
the cost-insensitive policy π 2 simply ignores the items' costs and greedily selects the affordable items that maximize the worst-case utility gain.
contrasting
NeurIPS
train_394
The main difference between our works is the use of a linear scaling of the learning rate, similarly to Krizhevsky (2014), and as suggested by Bottou (2010).
we found that linear scaling works less well on CIFAR10, and later work found that linear scaling rules work less well for other architectures on ImageNet (You et al., 2017).
contrasting
NeurIPS
train_395
This is likely possible to improve by tuning the trade-off between relevance and diversity, such as a making a more sophisticated choice of S and σ.
we leave this to future work.
contrasting
NeurIPS
train_396
For any δ > 0, the sequence This theorem can be easily generalized to hold for values of ε i that are not all equal (as done in [KOV15]).
this is not as all-encompassing as it would appear at first blush, because this straightforward generalization would not allow for the values of ε i and δ i to be chosen adaptively by the data analyst.
contrasting
NeurIPS
train_397
Most commercial LP software thus still relies on exact methods to solve the linear system.
some dual or primal (stochastic) sub-gradient descent methods have cheap cost per iteration, but require O(1/ε²) iterations to find a solution of precision ε, and in practice can hardly even find a feasible solution satisfying all constraints [14].
contrasting
NeurIPS
train_398
Similar to our proposal this work suggests presenting trajectory data to an informed expert.
their queries require the expert to express preferences over approximate state visitation densities and to possess knowledge of the expected performance of demonstrated policies.
contrasting
NeurIPS
train_399
Theoretically, the position model should result in three identical peaks that are displaced in disparity.
the measurements show a wide variation in the peak sizes.
contrasting
NeurIPS
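A minimal follow-up sketch: every row in the slice shown above carries the label "contrasting" and the domain "NeurIPS", so a filter like the one below would reproduce it. The dataset path is again a placeholder (assumption).

```python
# Sketch only: "org-name/sentence-pair-dataset" is a hypothetical placeholder path.
from datasets import load_dataset

ds = load_dataset("org-name/sentence-pair-dataset", split="train")

# Keep only the label/domain combination visible in this preview slice.
subset = ds.filter(lambda ex: ex["label"] == "contrasting" and ex["domain"] == "NeurIPS")
print(len(subset), "contrasting NeurIPS pairs")
```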