Column      Type           Values
id          stringlengths  7–12 characters
sentence1   stringlengths  5–1.44k characters
sentence2   stringlengths  6–2.06k characters
label       stringclasses  4 values
domain      stringclasses  5 values
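To make the schema above concrete, here is a minimal, hypothetical Python sketch (not part of the original listing) that models one row and tallies the label and domain columns; the file name rows.jsonl, the Row class, and the load_rows helper are assumptions for illustration only.

```python
import json
from collections import Counter
from dataclasses import dataclass

# Hypothetical row structure mirroring the columns above
# (id, sentence1, sentence2, label, domain).
@dataclass
class Row:
    id: str         # 7-12 characters, e.g. "train_200"
    sentence1: str  # 5-1.44k characters
    sentence2: str  # 6-2.06k characters
    label: str      # one of 4 classes, e.g. "contrasting"
    domain: str     # one of 5 domains, e.g. "NeurIPS"

def load_rows(path: str) -> list[Row]:
    """Load rows from a JSONL export, one JSON object per line (assumed format)."""
    with open(path, encoding="utf-8") as f:
        return [Row(**json.loads(line)) for line in f]

if __name__ == "__main__":
    rows = load_rows("rows.jsonl")          # hypothetical export of the table below
    print(Counter(r.label for r in rows))   # expect 4 distinct labels
    print(Counter(r.domain for r in rows))  # expect 5 distinct domains
```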
train_200
In our framework, SGLD and SGRLD take Q(z) = 0 and instead stress the design of the diffusion matrix D(z), with SGLD using a constant D(z) and SGRLD an adaptive, θ-dependent diffusion matrix to better account for the geometry of the space being explored.
hMC takes D(z) = 0 and focuses on the curl matrix Q(z).
contrasting
NeurIPS
train_201
This is like the Type-II transformation in [31], and we found that it worked better than Type-I for our experiments.
we aim to minimize F (x) in a novel way, so first we review the submodular vertex-cover problem.
contrasting
NeurIPS
train_202
However, to achieve this their algorithm runs the entire junction tree algorithm in each iteration, and does not reuse messages between iterations.
sample Propagation reuses all but one of the messages between iterations, leading to a greatly reduced "constant factor".
contrasting
NeurIPS
train_203
At a high level, our proof constructs an estimation sequence {w^(t), M^(t), ε_t} such that ε_t → 0 and ‖w* − w^(t)‖_2 + ‖M* − M^(t)‖_2 ≤ ε_t. In conventional matrix sensing, this construction is possible when the sensing matrix satisfies the Restricted Isometry Property (RIP) [Candès and Recht, 2009]: when A is ℓ_2-norm δ_k-RIP for any rank-k matrix M, A′A is nearly isometric [Jain et al., 2012], which implies ‖M − A′A(M)/n‖_2 ≤ δ.
then we can construct our estimation sequence as follows: in gFM and symmetric rank-one matrix sensing, the ℓ_2-norm RIP condition cannot be satisfied with high probability [Cai and Zhang, 2015].
contrasting
NeurIPS
train_204
However, in real-world problems, it is hard to identify the conditions under which unlabeled data can help.
it is interesting to explore the relation between the low density assumption and the manifold assumption.
contrasting
NeurIPS
train_205
Namely, one can find other simple mediators that satisfy these two properties.
we show that the Shapley mediator is the unique mediator to satisfy the fairness, economic efficiency and stability requirements.
contrasting
NeurIPS
train_206
[21] have shown that this strategy can improve modern language models like recurrent networks without retraining.
their model assumes that the data distribution changes smoothly over time, by using a context window to improve the performance.
contrasting
NeurIPS
train_207
Most of the existing analyses of multiarmed bandits with side information have focused on the adversarial (worst-case) model, where the sequence of rewards associated with each state-action pair is chosen by an adversary.
many problems in real-life are not adversarial.
contrasting
NeurIPS
train_208
Phase regularisation could be useful for tasks with a target phase, or where something is known about the phase characteristics of the system involved.
it is not explored in this paper.
contrasting
NeurIPS
train_209
Results show that the proposed 20 informative regions yield comparable matting performance to a fine-labelled trimap in terms of quality.
generating a fine trimap requires considerable user effort, and it takes about 3 minutes to obtain a good alpha matte.
contrasting
NeurIPS
train_210
The objective of image retrieval is to quickly index and search the nearest images to a given query.
our goal is to localize objects in every single image of a dataset without supervision.
contrasting
NeurIPS
train_211
[9] demonstrates speedups of many orders of magnitude over the previous state of the art in the context of computing aggregates over the queries (such as the LSCV score for selecting the optimal bandwidth).
the authors did not discuss the sampling-based approach for computations that require per-query estimates, such as those required for kernel density estimation.
contrasting
NeurIPS
train_212
Unlike the other works [29,28] that use explicit attention parameters, MRN does not use any explicit attentional mechanism.
we observe the interpretability of element-wise multiplication as an information masking, which yields a novel method for visualizing the attention effect from this operation.
contrasting
NeurIPS
train_213
Finally, we exemplified our work by using KD-trees as the tree-consistent partition structure for generating the component-specific partitions in CS-EM, which limited its effectiveness in high dimensions.
any hierarchical partition structure can be used, and the work in [8] therefore suggests that changing to an anchor tree (a special kind of metric tree [15]) will also render CS-EM effective in high dimensions, under the assumption of lower intrinsic dimensionality for the data.
contrasting
NeurIPS
train_214
In the Gaussian setting, the minimum sample complexity can be improved to n = Ω(∆^2 log p), i.e., when J_min = Θ(1/√∆), where the maximum degree scales as ∆ = Θ(log p log c) [7].
for Ising models, the minimum sample complexity can be further improved to n = Ω(c^4 log p), i.e., when J_min = Θ(J*) = Θ(1/c).
contrasting
NeurIPS
train_215
In particular, when f(S*) = f(∅) = 0, it returns large sets with large positive cost.
the deviation of the approximate edge weights ν_i from the true cost is bounded [18].
contrasting
NeurIPS
train_216
WAGE [19] quantizes weights, activations, errors and gradients to 2, 8, 8 and 8 bits respectively.
all of these techniques incur significant accuracy degradation (> 5%) relative to full-precision models.
contrasting
NeurIPS
train_217
Economists and computer scientists are often concerned with inferring people's preferences from their choices, developing econometric methods (e.g., [1,2]) and collaborative filtering algorithms (e.g., [3,4,5]) that will allow them to assess the subjective value of an item or determine which other items a person might like.
identifying the preferences of others is also a key part of social cognitive development, allowing children to understand how people act and what they want.
contrasting
NeurIPS
train_218
We now define the expected reward to be the adaptive value of information of extracting the a'th set of features given the system state and budget B: Intuitively, (3) says that each time we add additional features to the computation, we gain reward equal to the decrease in error achieved with the new features (or pay a penalty if the error increases.)
if we ever exceed the budget, then any further decrease does not count; no more reward can be gained.
contrasting
NeurIPS
train_219
Thus, computing The value of Q t depends on the reward parameters θ t , the model, and the planning depth.
as we present below, the process of computing the gradient closely resembles the process of planning itself, and the two computations can be interleaved.
contrasting
NeurIPS
train_220
In general, it is hard to show an explicit separation result for ḡJ .
in simple models, we can do explicit computations to show separation.
contrasting
NeurIPS
train_221
'S' is a singleview anomaly since 'S' is located far from other instances in each view.
both views of 'S' have the same relationship with the others (they are far from the other instances), and then 'S'
contrasting
NeurIPS
train_222
TensorLog establishes a connection between inference using first-order rules and sparse matrix multiplication, which enables certain types of logical inference tasks to be compiled into sequences of differentiable numerical operations on matrices.
tensorLog is limited as a learning system because it only learns parameters, not rules.
contrasting
NeurIPS
train_223
Participants are required to choose one out of four possible materials.
it can still be challenging to distinguish between materials, especially when sampled ones have similar damping and specific modulus.
contrasting
NeurIPS
train_224
The auditing complexity of this algorithm can also be as large as Θ(log 2 (m)).
auditing allows us to beat this barrier.
contrasting
NeurIPS
train_225
Although fMRI is the most popular method for functional brain imaging with high spatial resolution, it suffers from poor temporal resolution since it measures blood oxygenation level signals with fluctuations in the order of seconds.
dynamic neuronal activity has fluctuations in the sub-millisecond time-scale that can only be directly measured with electromagnetic source imaging (ESI).
contrasting
NeurIPS
train_226
In higher dimensions (d ≥ 4), SMC did not find all of the regions.
the MCMC algorithm found all of the regions, and did so in a reasonable amount of time.
contrasting
NeurIPS
train_227
Thus, a distributional assumption on P (X) does not restrict the set of covariate functions in any way.
specifying the conditional distribution, P (X|Y ), naturally entails restrictions on the form of P (Y |X).
contrasting
NeurIPS
train_228
We use partially pre-trained agents because random agents see few rewards in some of our domains.
this means we have to account for the budget (in terms of real environment steps) required to pretrain the data-generating agent, as well as to then generate the data.
contrasting
NeurIPS
train_229
These developments have made it possible to apply RNNs to new domains such as language translation [1,40] and parsing [44], and image and video captioning [7,45].
the current RNNs are designed to output one "token" of the input sequence at a time, so they cannot properly handle the segment detection task, in which a continuous chunk of the inputs is selected at each step.
contrasting
NeurIPS
train_230
1BitSGD was experimentally observed to preserve convergence [35], under certain conditions; thanks to the reduction in communication, it enabled state-of-the-art scaling of deep neural networks (DNNs) for acoustic modelling [37].
it is currently not known if 1BitSGD provides any guarantees, even under strong assumptions, and it is not clear if higher compression is achievable.
contrasting
NeurIPS
train_231
Recently, requirements for various non-Gaussian convolutions have emerged and are continuously getting higher.
the handmade acceleration approach is no longer feasible for so many different convolutions since it is a time-consuming and painstaking job.
contrasting
NeurIPS
train_232
To extract convolutional features for source D_s and target D_t, the input source and target images (I_s, I_t) are first passed through fully-convolutional feature extraction networks with shared parameters W_F such that D_i = F(I_i | W_F), and the feature for each pixel then undergoes L_2 normalization.
in the recurrent formulation, at each iteration the target features D_t can be extracted accordingly; however, extracting each feature by transforming local receptive fields within the target image I_t according to T_i for each pixel i and then passing it through the networks would be time-consuming when iterating the networks.
contrasting
NeurIPS
train_233
Exact computation in these cases is often computationally intractable, which has led to many approximation algorithms, such as variational approximation [5], or loopy belief propagation.
most of these methods still rely on the propagation of the exact probabilities (upstream and downstream evidence in the case of belief propagation), rather than an approximation.
contrasting
NeurIPS
train_234
Distributions over matrices with exchangeable rows and infinitely many columns are useful in constructing nonparametric latent variable models.
the distribution implied by such models over the number of features exhibited by each data point may be poorly-suited for many modeling tasks.
contrasting
NeurIPS
train_235
A sufficiently weak dependence results in a monotonically increasing Bayes factor which favors the absence of the edge A − B at any finite value of α.
given a sufficiently strong dependence between A and B, the log Bayes factor takes on positive values for all (finite) α exceeding a certain value α_+ of the scale parameter.
contrasting
NeurIPS
train_236
As alluded to above, for the purpose of establishing analytic properties of the algorithm, we will assume comparisons are governed by the BTL model of pairwise comparisons.
the algorithm itself operates with data generated in arbitrary manner.
contrasting
NeurIPS
train_237
For example, [15] demonstrates that on USPS, using lasso and group lasso regularizations together outperforms models with a single regularizer.
they only consider the squared loss in their paper, whereas we consider a logistic loss which leads to better performance.
contrasting
NeurIPS
train_238
These estimators enjoy the parametric rate of O(n^{-1/2}) when β > d/4, and work by optimally estimating the density and then applying a correction to the plug-in estimate.
our estimator undersmooths the density, and converges at a slower rate of O(n^{-β/(β+d)}) when β < d (and the parametric rate O(n^{-1/2}) when β ≥ d), but obeys an exponential concentration inequality, which is not known for the estimators of [8].
contrasting
NeurIPS
train_239
We can also derive upper bounds on the difference between LSIF and uLSIF and show that uLSIF gives a good approximation to LSIF.
we do not go into the detail due to space limitation.
contrasting
NeurIPS
train_240
In practice, their model is defined by the same forward equations as ours.
equation 3, which computes the backward vectors, is instead: (Figure 4: Examples of predictions made by the GB-RNN for Twitter documents.)
contrasting
NeurIPS
train_241
Note that the GP does not guarantee that the predicted Q has rank P . Therefore, we do not truly guarantee that Z satisfies the constraints.
as shown in our experiments, the violation of the constraints induced by the factorization is much smaller than the one produced by doing prediction in the original variables.
contrasting
NeurIPS
train_242
In most cases, both Robust PCA and AMMC perform quite similarly (see Figure 5 in Appendix E).
in one case AMMC achieves 87.67% segmentation accuracy (compared with the ground truth, manually segmented), while Robust PCA only achieves 74.88% (Figure 3).
contrasting
NeurIPS
train_243
In contrast, neural networks employ massively-parallel, graded processing that can search out many possible solutions at the same time, and optimize those that seem to make graded improvements in performance.
the discrete character of structured representations requires exhaustive combinatorial search in high-dimensional spaces.
contrasting
NeurIPS
train_244
Empirical simulations showing that the other forward algorithms also suffer this regret are in Appendix F. An attractive feature of forward algorithms is that they generalize to partial orders, for which efficient offline optimization algorithms exist.
in Section 4 we saw that FAs only give an Õ(t^{-1/2}) rate, while in Section 3 we saw that Õ(t^{-2/3}) is possible (with an algorithm that is not known to scale to partial orders).
contrasting
NeurIPS
train_245
In [5], the consistency of SVM with additive kernel is established, where the kernel-norm regularizer is used.
the sparsity on variables and the learning rate are not investigated in previous articles.
contrasting
NeurIPS
train_246
We can see that OLS and SOLS require approximately the same number of iterations for comparable decrease in objective function value.
since the SOLS instance has a much smaller size, its per iteration computational cost is much lower than that of OLS.
contrasting
NeurIPS
train_247
We are focussing here on the same problem but using CSP extracted features and arrive at similar results.
in a theoretical part we show that using more classes can be worth the effort if a suitable accuracy of all pairwise classifications is available.
contrasting
NeurIPS
train_248
Furthermore, when some of the singular values of M fall below the "noise level" √d·σ, one can show a tighter bound, with a nearly-optimal bias-variance tradeoff; see Theorem 2.7 in [5] for details.
when M is full-rank, then the error of M depends on the behavior of the tail M_c. We will consider a couple of cases.
contrasting
NeurIPS
train_249
When µ_1 = 1, µ_2 = 0, the attacker is interested in increasing the RMSE of the collaborative filtering system and hence reducing the system's availability.
when µ_1 = 1, µ_2 = −1, the attacker wishes to increase RMSE while at the same time keeping the rating of specific items (j_0) as low as possible for certain malicious purposes.
contrasting
NeurIPS
train_250
A more recent work proposes an architecture that resembles the structure of the hippocampus to facilitate continual learning for more complex data such as small binary pixel images [15].
none of them demonstrates scalability to high-dimensional inputs similar to those that appear in the real world, due to the difficulty of generating meaningful high-dimensional pseudoinputs without further supervision.
contrasting
NeurIPS
train_251
In contrast, for subject one the maximum is achieved at 4.9 seconds, yielding a low steepness value.
a low value is also found for the submission of all other competitors.
contrasting
NeurIPS
train_252
Note the explicit incorporation of the reconstructive and discriminative component into sparse coding, in addition to the classical reconstructive term (see [9] for a different classification component).
since the classification procedure from Eq.
contrasting
NeurIPS
train_253
The very concept of a "reaction" can be ambiguous, as it corresponds to a macroscopic abstraction, hence simplification, of a very complex underlying microscopic reality, ultimately driven by the laws of quantum and statistical mechanics.
even for relatively small systems, it is impossible to find exact solutions to the Schrödinger equation.
contrasting
NeurIPS
train_254
Unfortunately, both variants suffer from a polynomial time complexity with a super-linear dependence on the dimensionality d (at least a power of 4), which renders them not practical for optimizing problems of high dimension.
second-order information carried by the Hessian has been utilized to escape from a saddle point, which usually yields an almost linear time complexity in terms of the dimensionality d under the assumption that the Hessian-vector product (HVP) can be performed in a linear time.
contrasting
NeurIPS
train_255
Thus, according to these schemes, patterns are stored wholesale in the hippocampus when they first appear, and are continually read back to cortex to cause plasticity along with the new information.
if the hippocampus is permanently required to prevent a catastrophe, then, first, there is no true consolidation: if neocortical plasticity is not inhibited by hippocampal damage [20], then its integrity is permanently required to prevent degradation; and, second, what is the point of consolidation - couldn't the hippocampus suffice by itself?
contrasting
NeurIPS
train_256
It was shown by [8] that such clusterings are readily detected offline by classical batch algorithms.
we prove (Theorem 3.8) that no incremental method can discover these partitions.
contrasting
NeurIPS
train_257
In the example of Figure 3c with binary random variables, the model has 11 parameters.
these parameters are determined by the environment: To be adaptive in nonstationary environments, the model must be updated following each experienced state.
contrasting
NeurIPS
train_258
[2013] is closest to ours in also presenting a framework with three criteria related to discrimination control (group fairness), individual fairness, and utility.
the criteria are manifested less directly than in our proposal.
contrasting
NeurIPS
train_259
We agree with the general consensus that fast learning involves the feedforward connections.
by considering positional invariance for discrimination, we show that there is an inherently non-linear component to the overall task, which defeats feedforward algorithms.
contrasting
NeurIPS
train_260
Thin junction trees (graphs with low tree-width) are extensions of trees, where inference can be solved efficiently using the junction tree algorithm [7].
learning junction trees with tree-width greater than one is NP-complete [6] and tractable learning algorithms (e.g.
contrasting
NeurIPS
train_261
Existing approaches to clustering HMMs operate directly on the HMM parameter space, by grouping HMMs according to a suitable pairwise distance defined in terms of the HMM parameters.
as HMM parameters lie on a non-linear manifold, a simple application of the k-means algorithm will not succeed in the task, since it assumes real vectors in a Euclidean space.
contrasting
NeurIPS
train_262
A range-EEG feature has been proposed [23], which measures the peak-to-peak amplitude.
our approach learns frequency bands of interest and we can deal with long time series evaluated in our experiments.
contrasting
NeurIPS
train_263
All units and parameters at all levels of the network are engaged in representing any given input and are adjusted together during learning.
we argue that one-shot learning of new classes will be easier in architectures that can explicitly identify only a small number of degrees of freedom (latent variables and parameters) that are relevant to the new concept being learned, and thereby achieve more appropriate and flexible transfer of learned representations to new tasks.
contrasting
NeurIPS
train_264
Therefore EM algorithms can converge to local optima.
this problem can be alleviated using deterministic annealing as described in [9,10].
contrasting
NeurIPS
train_265
The statistical properties of PEM (and Maximum Likelihood) methods are well understood when the model structure is assumed to be known.
in real applications, first a set of competitive parametric models has to be postulated.
contrasting
NeurIPS
train_266
One line of work [3,4] divides game images into patches and applies a Bayesian framework to predict patch-based observations.
this approach assumes that neighboring patches are enough to predict the center patch, which is not true in Atari games because of many complex interactions.
contrasting
NeurIPS
train_267
The resulting model, called BicycleGAN, effectively achieves one-to-many image translations.
there are several differences with our method.
contrasting
NeurIPS
train_268
The AltMin technique has also been applied to many other estimation problems, such as matrix completion [19], phase retrieval [27], and mixed linear regression [44].
the current theoretical understanding of AltMin is still incomplete.
contrasting
NeurIPS
train_269
In numerical analysis this is typically a much-desired feature, leading to methods with improved stability and accuracy.
it is still a three-part procedure, analogous for example to paired Adams-Bashforth and Adams-Moulton integrators used in PEC mode (Butcher, 2008).
contrasting
NeurIPS
train_270
Like Nash equilibrium, a QPE and EFPE can be computed in polynomial time in the size of the input game.
the big-O complexity hides dramatically larger constants in the case of a QPE or an EFPE, and the algorithms known so far thus do not scale beyond small instances [Čermák et al., 2014; Ganzfried and Sandholm, 2015].
contrasting
NeurIPS
train_271
The slope for LMC and SGLDFP is −1, which confirms the convergence of θ_n − θ to 0 at a rate N^{-1}.
we can observe that θ_n − θ converges to a constant for SGD and SGLD.
contrasting
NeurIPS
train_272
They primarily focus on nonlinear dynamics and an RNN-based variational family, as well as allowing control inputs.
the approach does not extend to general graphical models or discrete latent variables.
contrasting
NeurIPS
train_273
Under more realistic assumptions, i.e., when more classes have increasing pairwise classification error compared to a wisely chosen subset, it is improbable that the bit rate can be increased by raising the number of classes beyond three or four.
this depends strongly on the pairwise errors.
contrasting
NeurIPS
train_274
Such strength and specialization are in agreement with data on climbing fibers in the cerebellum [18][19][20], which are believed to bring information about errors during motor learning [21].
in this model, the specificity of the error signals is defined by a weight matrix through which the errors are fed to the neurons.
contrasting
NeurIPS
train_275
That NoSDE games exist is surprising, in that randomness is needed even though actions are always taken with complete information about the other player's choice and the state of the game.
the next result is even more startling.
contrasting
NeurIPS
train_276
(2015), we use the method of moments for estimating latent-variable models.
those papers use it for parameter estimation in the face of non-convexity, rather than as a way to avoid full estimation of p(f v | y).
contrasting
NeurIPS
train_277
When (A2) holds, uniform confidence intervals of f on its level sets are easy to construct because little statistical efficiency is lost by slightly enlarging the level sets so that complete d-dimensional cubes are contained in the enlarged level sets.
when regularity of level sets fails to hold such nonparametric estimation can be very difficult or even impossible.
contrasting
NeurIPS
train_278
If the ith sensor is not involved in a reward communication event at that time, its global reward estimate is updated according to Y_i(k) = αY_i(k − 1) + r_i(k).
at any time k that there is a communication event, its global reward estimate is updated according to , where j is the index of the sensor with which communication occurs.
contrasting
NeurIPS
train_279
Q*bert presents an interesting difference between human and synthetic preferences: on short timescales, the human feedback does not capture fine-grained reward distinctions (e.g., whether the agent covered one or two tiles) which are captured by the synthetic feedback.
on long timescales this does not matter much and both models align well.
contrasting
NeurIPS
train_280
Our model treats objection as a more challenging decision, thereby deserving higher weight.
the middle two sequences receive alternating votes.
contrasting
NeurIPS
train_281
The Diffusion Network (DN) is a stochastic recurrent network that has been shown capable of modeling the distributions of continuous-valued, continuous-time paths.
the dynamics of the DN are governed by stochastic differential equations, making the DN unfavourable for simulation in a digital computer.
contrasting
NeurIPS
train_282
Note that an adversarial divergence is not necessarily a metric, and therefore does not necessarily induce a topology.
convergence in an adversarial divergence can still imply some type of topological convergence.
contrasting
NeurIPS
train_283
Our design of the move kernel K_t is based on two observations.
first, we can make use of U and σ_V as auxiliary variables, effectively sampling this move would be highly inefficient due to the number of variables that need to be sampled at each update.
contrasting
NeurIPS
train_284
Adaptive schemes, where tasks are assigned based on the data collected thus far, are widely used in practical crowdsourcing systems to efficiently allocate the budget.
existing theoretical analyses of crowdsourcing systems suggest that the gain of adaptive task assignments is minimal.
contrasting
NeurIPS
train_285
This demonstrates that naively learning from synthetic images can be problematic due to a gap between synthetic and real image distributions -synthetic data is often not realistic enough with artifacts and severe texture losses, misleading the network to overfit to fake information only presented in synthetic images and fail to generalize well on real data.
with the injection of photorealistic and identity preserving faces generated by DA-GAN without extra human annotation efforts, our method outperforms b1 by 1.00% for TAR @ FAR=0.001 of verification and 1.50% for FNIR @ FPIR=0.01, 0.50% for Rank-1 of identification.
contrasting
NeurIPS
train_286
In the presence of a single agent, such problems have been studied in the context of offline policy learning [6,51] and online bandits (with imperfect information) [22,25,38].
the multi-agent learning setting is again under-explored; we leave that for future work.
contrasting
NeurIPS
train_287
Since in LCC each data point is encoded by some anchor points on the data manifold, it can model the decision boundary of an SVM directly using f(x) ≈ Σ_{v∈C} γ_v(x) f(v).
then by taking γ_x as the input data of a linear SVM, the f(v)'s can be learned to approximate the decision boundary f. OCC learns a set of orthogonal basis vectors, rather than anchor points, and corresponding coding for data.
contrasting
NeurIPS
train_288
On the one hand, MDS uses the pseudo-distance defined in equation 1, whose relationship with the real distance between two pixels in the original image is linear only in a small neighborhood.
isomap uses the geodesic distances in the neighborhood graph, whose relationship with the real distance is really close to linear.
contrasting
NeurIPS
train_289
an undirected network with n nodes where each pair of nodes (i, j) ∈ V^2 belongs to E independently of the others with probability p), the exact influence of a set of nodes is not known.
percolation theory characterizes the limit behavior of the giant connected component when n → ∞.
contrasting
NeurIPS
train_290
Specifically, because of the assumed independence of the {v_i}, the EM method requires one to repeatedly maximize the Q-function such that the estimate of α at the (m + 1)th iteration is: Like the compound Dirichlet likelihood, the compound shadow Dirichlet likelihood is not necessarily concave.
note that the Q-function given in (7) is concave, because log p(v_i | α) = −log|det(M)| + log p_{D,α}(M^{-1} v_i), where p_{D,α} is the Dirichlet distribution with parameter α, and by a theorem of Ronning [11], log p_{D,α} is a concave function, and adding a constant does not change the concavity.
contrasting
NeurIPS
train_291
1, the activation of the last layer is therefore equal to where we defined the total weight matrix product W in the last expression, equal to the chain of matrix multiplications along all layers 1, 2, . . . , L. This expression makes obvious the uselessness of having multiple, successive linear layers, as their combined effect reduces to a single one.
the dynamics of learning (e.g.
contrasting
NeurIPS
train_292
PERSPECTIVE is the fastest among all methods and is 60% faster than SPHCONV, followed by DIRECT which is 23% faster than SPHCONV.
both baselines are noticeably inferior in accuracy compared to SPHCONV.
contrasting
NeurIPS
train_293
The two estimated masks achieve similar accuracy around 90%.
it is clear that the DNN mask misses significant portions of unvoiced speech, e.g., between frame 30-50 and 220-240.
contrasting
NeurIPS
train_294
The runtime of PDMM decreases as K increases from 21 to 61.
the speedup from 61 to 81 is negligible.
contrasting
NeurIPS
train_295
Technically speaking, the parameters of both the component and the corresponding cluster have to be updated for exact inference.
updating cluster parameters for every data instance removed will significantly slow down inference.
contrasting
NeurIPS
train_296
Variational methods are typically fast, and often produce high-quality approximations.
when the variational approximations are poor, estimates can be correspondingly worse.
contrasting
NeurIPS
train_297
So far, we have assumed the smoothness t of the true distribution P is known, and used that to tune the parameter ζ of the estimator.
in reality, t is not known.
contrasting
NeurIPS
train_298
CP-VTON [29] learns a thin-plate spline transformation for transforming the in-shop clothes into fitting the body shape of the target person via a Geometric Matching Module (GMM).
all methods above share a common problem, ignoring the deep feature maps misalignment between the condition and target images.
contrasting
NeurIPS
train_299
The GAN Generator is a deep non-linear transformation from latent to image space.
each GMM component is a simple linear transformation (Az + µ).
contrasting
NeurIPS