Columns (each record below lists these fields, one per line, in this order):
  id         string, length 7–12
  sentence1  string, length 5–1.44k
  sentence2  string, length 6–2.06k
  label      string, 4 classes
  domain     string, 5 classes
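For illustration, here is a short Python sketch that regroups the flattened records below into dictionaries with the five fields listed above. The file name "records.txt" and the assumption that every record occupies exactly five consecutive non-empty lines are hypothetical conveniences for this plain-text dump, not part of any official loader.

# Minimal sketch (hypothetical loader): regroup the flattened five-line records
# below -- id, sentence1, sentence2, label, domain -- into dictionaries.
# "records.txt" is a placeholder path for a plain-text dump like this one.
FIELDS = ["id", "sentence1", "sentence2", "label", "domain"]

def parse_records(path="records.txt"):
    with open(path, encoding="utf-8") as f:
        lines = [line.strip() for line in f if line.strip()]
    # Consecutive non-empty lines form one record of five fields each.
    return [dict(zip(FIELDS, lines[i:i + 5])) for i in range(0, len(lines), 5)]

if __name__ == "__main__":
    records = parse_records()
    print(len(records), "records; first:", records[0]["id"], records[0]["label"])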
train_800
This setting is analogous to the problem of joint source-channel coding.
just like before, each encoded bit must depend on at most ∆ bits.
contrasting
NeurIPS
train_801
subject to the constraints in (3).
it can be anticipated that no element in the estimated 𝚪 and 𝚿 will be exactly zero, resulting in a model which is not interpretable, i.e., poor identification of disease-related regions.
contrasting
NeurIPS
train_802
In general, when one considers the temporal dynamics of an episode, more than two goals, or more complicated reward structures, the relationship becomes more complicated.
information is useful in abstracting away that complexity, and preparing Alice generically for a plethora of possible task setups.
contrasting
NeurIPS
train_803
This problem is often tackled in the same manner as object recognition and localization in images.
extension to a temporal domain comes with many challenges.
contrasting
NeurIPS
train_804
With heterogeneous costs, the quality of an action k under a context j is roughly captured by its normalized expected reward, defined as η_{j,k} = u_{j,k}/c_{j,k}.
the agent cannot focus only on the "best" action, i.e., k*_j = arg max_{k∈A} η_{j,k}, for context j.
contrasting
NeurIPS
train_805
Let G * be the highest-scoring decomposable DAG from step 2.
the upper bounds obtained via BNSL can at times be quite weak when the network structures contain many immoralities.
contrasting
NeurIPS
train_806
Deep features learned by CNNs can disentangle explanatory factors of variations behind data distributions to boost knowledge transfer [19,17].
the latest literature findings reveal that deep features can reduce, but not remove, the cross-domain distribution discrepancy [3], which motivates the state of the art deep feature adaptation methods [5,6,7].
contrasting
NeurIPS
train_807
Recently, several works [19,14] have made efforts to extend ADMM to its online or stochastic versions.
they suffer from relatively low convergence rates.
contrasting
NeurIPS
train_808
For these more difficult grids, however, STRIPES was the fastest of the algorithms, taking 0.5 -5 minutes.
the Nilsson and BMMF algorithms took 18 minutes and 2.5 -7 minutes, respectively.
contrasting
NeurIPS
train_809
To indicate the form of a representer theorem, suppose we solve for the optimal parameters Θ* = arg min_Θ (1/n) Σ_{i=1}^{n} L(x_i, y_i, Θ) + g(||Θ||) for some nondecreasing g. We would then like our pre-activation predictions Φ(x_t, Θ) to have the decomposition: Given such a representer theorem, α_i k(x_t, x_i) can be seen as the contribution of the training data x_i on the testing prediction Φ(x_t, Θ).
such representer theorems have only been developed for non-parametric predictors, specifically where Φ lies in a reproducing kernel Hilbert space.
contrasting
NeurIPS
train_810
Another approach is to pick from (m 1 , m 2 ) before reaching the factor f , so that the model becomes In this case, the message from x to f is not blurred, and the upward messages to (m 1 , m 2 ) are blurred, which is correct.
the downward messages from (m 1 , m 2 ) to f are blurred before reaching f , which is incorrect.
contrasting
NeurIPS
train_811
there is at least one pair of priors p_i and p_j with overlapping support), then one can do better by utilizing the "super prior" p = (1/k) Σ_i p_i within the original Theorem 3.2.
note that when the supports are disjoint, these two views (of multiple priors and a super prior) are equivalent.
contrasting
NeurIPS
train_812
This assumption is rather permissive and is satisfied by many first-order algorithms, e.g., SAG and SAGA [6].
the lower bound stated in the paper faces limitations in a few aspects.
contrasting
NeurIPS
train_813
It is worth pointing out that, similar to the MMTM, our PTM is not a generative model of networks per se because (a) empty and single-edge motifs are not modeled, and (b) one can generate a set of triangles that does not correspond to any network, because the generative process does not force overlapping triangles to have consistent edge values.
given a bag of triangular motifs E extracted from a network, the above procedure defines a valid probabilistic model p(E | α, λ) and we can legitimately use it for performing posterior inference p(s, θ, B | E, α, λ).
contrasting
NeurIPS
train_814
As work in [2] suggests, it is unlikely that a polynomial time algorithm can avoid such dependence.
once we are near the solution, as we show, this two-step procedure achieves the optimal error rate of s log p/n.
contrasting
NeurIPS
train_815
In some cases manual alignment to salient external events or behavioural time-course may be used to reduce temporal misalignment [8,9].
just as with variability in the trajectories themselves, temporal variations in purely internal states must ultimately be identified from neural data alone [10].
contrasting
NeurIPS
train_816
Overall both algorithms LINKAGE++ and Single-linkage perform considerably better when it comes to real-world data and LINKAGE++ and PCA+ dominate on our synthetic datasets.
in general there is no reason to believe that PCA+ would perform well in clustering truly hierarchical data: there are regimes of the HSBM for which applying only phase 1 of the algorithm might lead to a high misclassification error and high cost and for which we can prove that LINKAGE++ is a (1+ε)-approximation.
contrasting
NeurIPS
train_817
Some families have an elliptical dependence structure, similar to the multivariate normal distribution.
it is also possible to use completely different dependence structures which are more appropriate for the data at hand.
contrasting
NeurIPS
train_818
Optimization problem 8 is a mixed-integer quadratic programming problem (MIQP), which is NP-hard in general.
we can exploit the structure of the problem via a block-coordinate descent algorithm where W and {ρ_{i,m}} are optimized iteratively: 1.
contrasting
NeurIPS
train_819
For example, a conversion from scales to frequencies can be estimated using the center frequency of the mother wavelet F_c, F_s = F_c/s [10].
converting from scales to frequency is not useful unless prior knowledge about the signal is available or assumptions are made on the relevant frequency content of the signal.
contrasting
NeurIPS
train_820
Critically, this is where we diverge from previous analyses that assumed this distribution was factorised, or only trivially correlated due to reciprocal synapses being precisely (anti-)symmetric [1,2,4].
we explicitly study the emergence and effects of non-trivial correlations in the synaptic weight matrix distribution, because almost all synaptic plasticity rules induce statistical dependencies between the synaptic weights of each neuron (Fig.
contrasting
NeurIPS
train_821
Ideally, we would have that E||g||_2^2 ≤ α(a_i x) for some value of α, as previous methods do.
instead we settle for a bound of the form E||g||_2^2 ≤ α(a_i x) + β||x||_2^2.
contrasting
NeurIPS
train_822
The conversion strategies in this family also begin by using A to generate the sequence of online hypotheses.
instead of relying on a single hypothesis from the sequence, they set h to be some combination of the entire sequence.
contrasting
NeurIPS
train_823
Kernel-or nearest-neighbor-based methods, including nearly all of the methods described in Section 3, tend to require storing previously observed data, resulting in O(n) space requirements.
orthogonal basis estimation requires storing only O(Z D n ) estimated Fourier coefficients.
contrasting
NeurIPS
train_824
For small dimensionalities eigenvalues are small and therefore there is no advantage for oc-shrinkage.
the higher the order of oc-shrinkage, the larger the error by projecting out spurious large eigenvalues which should have been subject to regularization.
contrasting
NeurIPS
train_825
A symmetric prior over Φ only makes a prior statement (determined by the concentration parameter β) about whether topics will have more sparse or more uniform distributions over words, so the topics are free to be as distinct and specialized as is necessary.
it is still necessary to account for power-law word usage.
contrasting
NeurIPS
train_826
Previous GP-based classifiers did not use f within a margin-based classifier as in (6), implemented here via p(u_n) = N(−λ_n, γ^{-1} λ_n), where u_n = 1 − y_n f_n. It has been shown empirically that nonlinear SVMs and GP classifiers often perform similarly [8].
for the latter, inference can be challenging due to the non-conjugacy of multivariate normal distribution to the link function.
contrasting
NeurIPS
train_827
1(a)), 'MF' performs slightly better than our GenDeR when 300 ≤ k ≤ 400; and for the other values of k, the two methods mix with each other.
in terms of relevance (Fig.
contrasting
NeurIPS
train_828
Intuitively this is because EM attempts to maximize the overall likelihood.
our algorithm has significantly superior performance with respect to the edit distance which is the error in estimating the tree structure in the two components, as seen in Fig 2 .
contrasting
NeurIPS
train_829
When the target distribution is not Gibbs, we demonstrate that the second approach need not produce the optimal Gibbs distribution (with respect to log loss) even in the limit of infinitely many samples.
we prove that it produces models that are almost as good as the best Gibbs distribution according to a certain Bregman divergence that depends on the selection bias.
contrasting
NeurIPS
train_830
Dual decomposition methods are typically employed to solve inference tasks over combinatorial structures (e.g., [12; 13]).
we decompose the problem on two levels.
contrasting
NeurIPS
train_831
We have shown so far that our trained policies outperform appropriate baselines for the task when tested on novel environments.
we are still training and testing on the same settings, such as noise level, trajectory length and distribution of trajectories.
contrasting
NeurIPS
train_832
However, in cases where all of the input densities have little overlap with the product density, mixture IS performs very poorly (see Figure 4(c)).
multiscale samplers perform very well in such situations, because they can discard large numbers of low weight product density kernels.
contrasting
NeurIPS
train_833
Nguyen et al [23], Singh and Póczos [24], and Krishnamurthy et al [25] each proposed divergence estimators that achieve the parametric convergence rate O(1/T) under weaker conditions than those given in [1].
solving the convex problem of [23] can be more demanding for large sample sizes than the estimator given in [1] which depends only on simple density plug-in estimates and an offline convex optimization problem.
contrasting
NeurIPS
train_834
For an input image I and a given pair of kernels, we can measure the data log-likelihood by associating each window with the maximum likelihood kernel: We search for a blurring model p_{k_0} such that, when combined with the model p_1 (derivatives of the unblurred image), will maximize the log-likelihood of the observed derivatives: One problem we need to address in defining the likelihoods is the fact that uniform areas, or areas with pure horizontal edges (the aperture problem) don't contain any information about the blur.
uniform areas receive the highest likelihoods from wide blur kernels (since the derivatives distribution for wide kernels is more concentrated around zero, as can be observed in figure 1(c)).
contrasting
NeurIPS
train_835
At first glance, incorporating supervision into the WMD appears computationally prohibitive, as each individual WMD computation scales cubically with respect to the (sparse) dimensionality of the documents.
we devise an efficient technique that exploits a relaxed version of the underlying optimal transport problem, called the Sinkhorn distance [6].
contrasting
NeurIPS
train_836
Subsequently, for any x*_i ∈ [0, 1]: If x*_i is the expected allocation of player i under the efficient allocation rule X*_i(v) ≡ 1{v_i = max_j v_j}, then taking expectation of Equation (9) over v_i and adding across all players we get: The theorem then follows by invoking the fact that for any feasible allocation x: , using the fact that expected total agent utility plus total revenue at equilibrium is equal to expected welfare at equilibrium and setting µ = µ(D).
comparison with worst-case POA In the worst-case, µ(D) is upper bounded by 1, leading to the well-known worst-case price of anarchy ratio of the single-item first price auction of (1 − 1/e)^{-1}, irrespective of the bid distribution D. if we know the distribution D then we can explicitly estimate µ, which can lead to a much better ratio (see Figure 1).
contrasting
NeurIPS
train_837
Note that when the observations are either discrete or parametric, it is possible to estimate the distribution using O(1/ε^2) samples to achieve ε error in a suitable metric, say, using the maximum likelihood estimate.
the nonparametric setting is inherently more difficult and therefore the rate of convergence is slower.
contrasting
NeurIPS
train_838
Inference using deep neural networks is often outsourced to the cloud since it is a computationally demanding task.
this raises a fundamental issue of trust.
contrasting
NeurIPS
train_839
All synapses connecting the input and output layers are equally likely to be active during an anti-causal regime.
the increase in average contribution to the postsynaptic membrane potential for the correlated group of neurons renders this population slightly more likely to be active during the causal regime than any single member of the uncorrelated group.
contrasting
NeurIPS
train_840
In order to encourage the reinforcement learning agent to discover positive symptoms more quickly, a simple heuristic is to provide the agent with an auxiliary piece of reward when a positive symptom is queried, and a relatively smaller (or even negative) reward when a negative symptom is queried.
this heuristic suffers from the risk of changing the optimal policy of the original MDP.
contrasting
NeurIPS
train_841
The non-smoothness of f can be challenging to tackle.
in many cases of interest, the function f enjoys a favorable structure that allows us to tackle it with smoothing techniques.
contrasting
NeurIPS
train_842
The Input-Output HMM [3] extends HMMs by conditioning both their dynamics and emission model on an input sequence.
the IOHMM is representationally limited by its simple discrete state in the same way as a HMM.
contrasting
NeurIPS
train_843
Note that when τ = 1 this recovers the setting in [8].
we empirically found that using a small τ would result in accumulated ambiguity when generating words in our experiment.
contrasting
NeurIPS
train_844
Note that degenerate cluster allocations are generally suboptimal under the objective (1), as they would lead to a reduction in the marginal entropy H(y).
it is intuitive that maximization of the mutual information I(x, y) favors hard assignments of cluster labels to equiprobable data regions, as this would result in the growth in H(y) and reduction in H(y|x).
contrasting
NeurIPS
train_845
Return MDP assessment (π τ , T τ ) and This learning process is not guaranteed to converge, so upon termination, it could return an optimal, δ-stable MDP assessment for some very large δ.
it has been shown to be successful experimentally in simultaneous auction games [24] and other large games of imperfect information [7].
contrasting
NeurIPS
train_846
Recall d_β(x, x′) from (1) and consider If the above value is finite, then using any value even slightly larger than this would declare any x′ ∉ S(x) correctly as such, hence no false positives.
the above infimum may not exist for a general configuration.
contrasting
NeurIPS
train_847
The results in this paper are a generalization of the results of Zhang and Yu [24] to the online setting.
we emphasize that this generalization is nontrivial and requires different algorithmic ideas and proof techniques.
contrasting
NeurIPS
train_848
Several machine learning frameworks such as TensorFlow [1], MXNet [2], and Caffe2 [3], come with distributed implementations of popular training algorithms, such as mini-batch SGD.
the empirical speed-up gains offered by distributed training often fall short of the optimal linear scaling one would hope for.
contrasting
NeurIPS
train_849
We thus advocate using it as a replacement for the elastic net.
we also show that the gap between the elastic net and the k-support norm is at most a factor of √ 2, corresponding to a factor of two difference in the sample complexity.
contrasting
NeurIPS
train_850
Projected on the important dimension, clusters will be concentrated into two distinct points.
when the Euclidean distance is adopted as in K-Means, it is difficult to recover true clusters because two "lines" are close to each other.
contrasting
NeurIPS
train_851
For instance, regression with a decomposable kernel boils down to solving a Sylvester equation (which can be done efficiently) [10] and vector-valued Support Vector Machine (SVM) without intercept can be learned with a coordinate descent algorithm [21].
these methods can not be used in our setting since the loss function is different and considering the intercept is necessary for the quantile property.
contrasting
NeurIPS
train_852
Secondly, only the healthy subjects were remunerated.
repeating the analyses presented using only the MDD subjects yields the same results (data not shown).
contrasting
NeurIPS
train_853
Nowadays, to translate programs between different programming languages, typically programmers would manually investigate the correspondence between the grammars of the two languages, then develop a rule-based translator.
this process can be inefficient and error-prone.
contrasting
NeurIPS
train_854
Ideally one should design convex relaxations for each domain of Θ.
m exhibits some nice properties for any Θ: M ⪰ 0, M ⪯ I, tr(M) = tr((ΘΘ^⊤)^†(ΘΘ^⊤)) = rank(ΘΘ^⊤) = rank(Θ).
contrasting
NeurIPS
train_855
Thus g_t(L_t) becomes differentiable.
we have the following proposition about the gradient of f(L).
contrasting
NeurIPS
train_856
From this perspective HGMDPs appear to be a significant restriction over general POMDPs.
our first result shows that despite this restriction the worst-case complexity is not reduced even for deterministic dynamics.
contrasting
NeurIPS
train_857
A large number of previous studies of learning similarities have focused on metric learning, like in the case of a positive semidefinite matrix that defines a Mahalanobis distance [19].
similarity learning algorithms are often evaluated in a context of ranking [16,5].
contrasting
NeurIPS
train_858
These tasks are challenging, as code can be modified such that it syntactically differs (for instance, via different or reordered operations, or written in a different language altogether), but remains semantically equivalent (i.e., produces the same result).
these tasks are also ideal for machine learning, since they can be represented as classic regression and classification problems.
contrasting
NeurIPS
train_859
A similar situation was also observed in the server with 86G when n = 10 × 2^10.
the memory required by BADMM is O(n^2)-even when n = 15 × 2^10 (more than 0.2 billion parameters), BADMM can still run on a single GPU with only 5G memory.
contrasting
NeurIPS
train_860
The results are presented in Table 3 in Appendix F. In general, we observe P 3 & P 2 for all cases we have studied, which supports our conjecture.
this procedure is generally more time-consuming than EM for Model 2 since k!
contrasting
NeurIPS
train_861
[12] showed that the metric of the NSG corresponds with the changes in the stationary state-action joint distribution.
the metric of the NPG takes into account only changes in the action distribution and ignores changes in the state distribution, which also depends on the policy in general.
contrasting
NeurIPS
train_862
If we denote the predictions of the teacher by p(y|x, D_N) and the parameters of the student network by w, our objective becomes Unfortunately, computing this integral is not analytically tractable.
we can approximate this by Monte Carlo: where Θ is a set of samples from p(θ|D_N).
contrasting
NeurIPS
train_863
Ikeda pointed out that the problem we need to consider is the semiparametric model [10].
the problem remains unsolved.
contrasting
NeurIPS
train_864
As can be seen from Table 1, layer normalization (Ba et al., 2016) improves the performance of PAG significantly.
according to our results on En→De, layer norm affects the performance of rPAG only marginally.
contrasting
NeurIPS
train_865
The representations that the network computes at different layers are related to the inference in an implicit latent variable model but the designer of the model does not need to know about them.
it is actually tremendously valuable to understand what kind of inference is required by different types of probabilistic models in order to design an efficient network architecture.
contrasting
NeurIPS
train_866
We observe significant improvements in 5 datasets (acoustic, census, heart, ionosphere, letter), demonstrating the advantage of using second order information.
we found that Oja-SON was outperformed by ADAGRAD on most datasets, mostly because the diagonal adaptation of ADAGRAD greatly reduces the condition number on these datasets.
contrasting
NeurIPS
train_867
Their approach, similar to ours, generates a set of ranked class-agnostic proposals.
our model generates segmentation proposals instead of the less informative bounding box proposals.
contrasting
NeurIPS
train_868
Character-level convolutional filters are used to shrink the size of the input-embedding matrix in [13].
it still suffers from the gigantic output-embedding matrix.
contrasting
NeurIPS
train_869
Therefore, the condition number depends only on the largest eigenvalue of This shows that for large regularization parameters, the condition number is close to one and convergence is fast.
for small regularization parameters, the condition number gets very large, even if X is well-conditioned.
contrasting
NeurIPS
train_870
This may be undesired if we were actually interested in predicting noisy labels.
our goal is to predict clean labels, and the proposed framework benefits from the regularization that is imposed on the variational distribution.
contrasting
NeurIPS
train_871
While in standard control settings the sensors are assumed fixed, biological systems often gain from the extra flexibility of optimizing the sensors themselves.
this sensory adaptation is geared towards control rather than perception, as is often assumed.
contrasting
NeurIPS
train_872
Performance on large, realistic datasets is inarguably a better metric of architecture quality than performance on smaller datasets such as PTB.
such metrics make comparison among models nearly impossible: performance on large datasets is non-standard because evaluation at this scale is infeasible in many research settings simply because of limited hardware access.
contrasting
NeurIPS
train_873
Computing the optimal twisting functions boils down to performing exact inference in the model, which is assumed to be intractable.
this is where the use of deterministic inference algorithms comes into play.
contrasting
NeurIPS
train_874
The online incremental gradient (or backpropagation) algorithm is widely considered to be the fastest method for solving large-scale neural-network (NN) learning problems.
we show that an appropriately implemented iterative batch-mode (or block-mode) learning method can be much faster.
contrasting
NeurIPS
train_875
As we increase the scale of W, performance of the vanilla-RNN improves, suggesting that the model is able to better utilize the input information.
mI-RNN is much more robust to different initializations, where the scaling has almost no effect on the final performance.
contrasting
NeurIPS
train_876
On the one hand, a successful learner uncovers substantial structure of the target distribution.
this objective is clearly impossible when the means and covariances are extremely close.
contrasting
NeurIPS
train_877
Solving this equation analytically is not always possible.
for a broad class of functions, we can obtain an analytic solution.
contrasting
NeurIPS
train_878
Formally, this can be seen by comparing the variation of the log-determinant and trace functions with respect to the eigenvalues of the PSD matrix K, The gradient of the log-determinant is largest in the direction of the smallest eigenvalue of the error covariance matrix.
the MSE gives equal weight to all directions of the space.
contrasting
NeurIPS
train_879
CNN has the advantage of parallel processing because there is no dependency between the input and output.
cNN usually requires a large amount of feature maps, while RNN only needs to store the current cell and output state vectors in each layer.
contrasting
NeurIPS
train_880
Figure 2 shows that OLS leads to accurate ATE estimation for Gaussian additive noise when the number of covariates is sufficiently large, which is consistent with Corollary 1.1.
for high dimensional data, matrix factorization preprocessing dominates all other feasible methods and its RMSE is very close to the oracle regression for sufficiently large number of covariates.
contrasting
NeurIPS
train_881
First, the two square-root terms of the bound depend on r in opposite ways: the first is monotonically increasing, while the second is monotonically decreasing.
one could expect to optimize the bound by minimizing over r. the bound also depends on r indirectly via other quantities (e.g.
contrasting
NeurIPS
train_882
For sufficiently smooth objectives, the same algorithm is also optimal even if prox access is allowed, since Theorem 2 implies a lower bound of: That is, for smooth objectives, having access to a prox oracle does not improve the optimal complexity over just using gradient access.
for non-smooth or insufficiently smooth objectives, there is a gap between (11) and (12).
contrasting
NeurIPS
train_883
Hence, if both players play what we call optimal strategies, then neither player can improve and they are at Nash equilibrium.
suppose player 1 selects a strategy σ_1 that does not guarantee him payoff at least v − c_2(σ_2).
contrasting
NeurIPS
train_884
While simple, this has no obvious probabilistic interpretation, and other divergences perform better in the experiments below.
it also forms the basis of our projected gradient descent strategy for computing the projection in Eq.
contrasting
NeurIPS
train_885
[22]'s adversarial loss encourages the hidden-state dynamics of teacher-forced and soft-sampled sequences to be similar.
there remains a gap between the dynamics of these sequences and sequences hard-sampled at test time.
contrasting
NeurIPS
train_886
with zero phase-lag [13][14][15][16][17], or strictly sequential patterns as in synfire chains [18][19][20][21] (see figure 1b).
some experimental studies have suggested that cortical spiking activity may harbor motifs with more complex structure [5,22] (see figure 1c).
contrasting
NeurIPS
train_887
Other rigorous variational lower bounds on the softmax have been used before [4,5], however they are not easily scalable since they require optimizing data-specific variational parameters.
the bound we introduce in this paper does not contain any variational parameter, which greatly facilitates stochastic minibatch training.
contrasting
NeurIPS
train_888
Note that while the set of rank-L PSD matrices is non-convex, we can still project onto this set efficiently using the eigenvalue decomposition of where r = min(L, L^+_M) and L^+_M is the number of positive eigenvalues of M. Λ_M(1:r) denotes the top-r eigenvalues of M and U_M(1:r) denotes the corresponding eigenvectors.
while the above update restricts the rank of all intermediate iterates M t to be at most L, computing rank-L eigenvalue decomposition can still be fairly expensive for large n. by using special structure in the update (6), one can significantly reduce eigenvalue decomposition's computation complexity as well.
contrasting
NeurIPS
train_889
Some of the techniques in maximizing multilinear extensions [13; 7; 8] have inspired this work.
we are the first to explore the rich properties and devise algorithms for the general constrained DR-submodular maximization problem over continuous domains.
contrasting
NeurIPS
train_890
The results here provide substantial insight into the nature of GAN optimization, perhaps even offering some clues as to why these methods have worked so well despite not being convex-concave.
we also emphasize that there are substantial limitations to the analysis, and directions for future work.
contrasting
NeurIPS
train_891
Normalizing the bin histograms can then give us probability estimates that can be used to calculate entropies.
a loss based on entropy calculated with hard counts cannot be used to regularize the network, since the indicator quantization function is non-differentiable.
contrasting
NeurIPS
train_892
Efficient implementations of tensor operators, such as matrix multiplication and high dimensional convolution, are key enablers of effective deep learning systems.
current systems rely on manually optimized libraries, e.g., cuDNN, that support only a narrow range of server class GPUs.
contrasting
NeurIPS
train_893
Specifically, the number of assets under management is usually much larger than the sample size of exploitable historical data.
extreme events are typical in financial asset prices, leading to heavy-tailed asset returns.
contrasting
NeurIPS
train_894
The normalization To convert an input matrix X ∈ R^p (p = mp_0) into a score vector t ∈ R^m, it seems that we need to learn a matrix W ∈ R^{m×mp_0}.
a natural permutation invariance requirement (if the documents associated are presented in a permuted fashion, the output scores should also get permuted in the same way) reduces the dimensionality of W to p_0 (see, e.g., [14] for more details).
contrasting
NeurIPS
train_895
It has been proved in (Nesterov and Polyak, 2006, Theorem 2) that all limit points of {x_k}_k generated by CR are second order stationary points.
the sequence is not guaranteed to be convergent and no convergence rate is established.
contrasting
NeurIPS
train_896
We formulate the minimization of α-divergence with α = ∞ within the fractional covering framework [24].
the standard iterative algorithm for solving fractional covering is not readily applicable to our problem due to its small stepsize.
contrasting
NeurIPS
train_897
Finite differencing, where the perturbation directions d are the n standard basis vectors, is a default approach for Jacobian estimation.
it requires n function evaluations which may be prohibitively expensive for large n. Another natural approach, when the number of measurements, say k, is smaller than n, is to estimate the Jacobian via linear regression, where an ℓ_2 regularizer is added to handle the underdetermined setting and ||·||_F stands for the Frobenius norm.
contrasting
NeurIPS
train_898
For example, according to the LWF Markov property, in the chain graph model in Figure 1(a), x_1 ⊥ x_3 | x_2 as x_1 and x_3 are separated by x_2 in the moralized graph in Figure 1(b).
the corresponding AMP Markov property implies a different probabilistic independence relationship, x_1 ⊥ x_3.
contrasting
NeurIPS
train_899
CRFs with sufficiently expressive feature representation are consistent estimators of the marginal probabilities of variables in cliques of the graph [9], but are oblivious to the evaluative loss metric during training.
sSVMs directly incorporate the evaluative loss metric in the training optimization, but lack consistency guarantees for multiclass settings [10,11].
contrasting
NeurIPS